An important aspect of the reactive approach to concurrent programming is non-blocking processing. This post compares blocking and non-blocking processing in general terms to capture the reactive idea in a nutshell.

Blocking Processing

Blocking (synchronous) processing has several characteristics:

  • Bound to the processing thread

  • The processing thread waits whenever an I/O operation is performed

[Diagram: blocking processing]

Under high load, this approach has the following consequences:

  • CPU & RAM resources are wasted while the thread is waiting for the I/O results.

  • If all threads are waiting, new user requests are either queued or dropped. This leads to poor user experience.

  • If all threads are waiting, the service becomes unresponsive for API clients. This leads to timeouts and API client failures. Basically, failure leads to more failure.
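The blocking model above can be sketched in plain Java. The class and method names here are illustrative, not from the post; `Thread.sleep` stands in for a blocking I/O call (e.g. a JDBC query), and a fixed pool of 2 threads plays the role of the server's worker threads. With 4 requests and 2 threads, two requests must sit in the queue until a thread frees up:

```java
import java.util.concurrent.*;

class BlockingServer {
    // Hypothetical handler: the thread sleeps, simulating blocking I/O.
    // While it sleeps, the thread does no useful work but still holds its stack.
    static String handle(String request) throws InterruptedException {
        Thread.sleep(100); // thread parked for the whole "I/O" duration
        return "response:" + request;
    }

    // Serve N requests on a fixed pool and return the elapsed time in ms.
    static long serve(int requests, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            final int id = i;
            cs.submit(() -> handle("req-" + id)); // excess requests wait in the queue
        }
        for (int i = 0; i < requests; i++) {
            cs.take().get(); // wait for all responses
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

With 2 threads and 100 ms of "I/O" per request, 4 requests take at least two waves (~200 ms): the pool's capacity, not the CPU, is the bottleneck.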

Non-Blocking Processing

Non-Blocking (aka reactive) processing has several characteristics:

  • Not bound to a specific processing thread

  • Threads do not wait while an I/O operation is performed

  • Threads are reused between calls

[Diagram: non-blocking processing]
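A minimal sketch of the same idea with `CompletableFuture` (names are illustrative; a real reactive stack would use RxJava or Project Reactor). Here the "I/O" result arrives later via a scheduler callback, so no request-handling thread is parked while the operation is in flight, and many requests can be outstanding at once:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

class NonBlockingSketch {
    // Simulated async I/O: the future is completed by a scheduler after 100 ms.
    // No caller thread blocks while the "I/O" is in progress.
    static CompletableFuture<String> fetchAsync(String request) {
        CompletableFuture<String> f = new CompletableFuture<>();
        CompletableFuture.delayedExecutor(100, TimeUnit.MILLISECONDS)
                .execute(() -> f.complete("response:" + request));
        return f;
    }

    // Fire all requests concurrently, then collect the results.
    static List<String> serve(int requests) {
        List<CompletableFuture<String>> futures = IntStream.range(0, requests)
                .mapToObj(i -> fetchAsync("req-" + i))
                .collect(Collectors.toList());
        // All requests are already in flight; join only gathers the results.
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }
}
```

Because the 100 ms waits overlap instead of occupying a thread each, 50 requests complete in roughly the time of one, without a 50-thread pool.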

Under high load, this approach has the following consequences:

  • High CPU & RAM utilization

  • Fewer threads are needed to serve the same number of requests as in the blocking case

However, non-blocking processing comes with a cost:

  • Backend design is more complicated, since you need to track the origin and arrival of responses & errors. This requires new design patterns (hopefully wrapped into frameworks like RxJava and Project Reactor).

  • Frontend design is more complicated, since responses arrive asynchronously via WebSockets, server-sent events, etc.
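The backend cost is easy to see even in a tiny sketch (again with `CompletableFuture` rather than a full reactive framework; the names are made up for illustration). In blocking code an error simply propagates up the caller's stack; in a non-blocking pipeline the error travels with the future, so every stage must explicitly decide how failures are handled:

```java
import java.util.concurrent.*;

class AsyncErrorHandling {
    // Hypothetical async lookup that can fail.
    static CompletableFuture<String> lookupUser(String id) {
        if (id.isEmpty()) {
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(new IllegalArgumentException("empty id"));
            return failed;
        }
        return CompletableFuture.completedFuture("user:" + id);
    }

    static String handleRequest(String id) {
        return lookupUser(id)
                .thenApply(String::toUpperCase)   // next pipeline stage
                .exceptionally(err -> {
                    // Errors arrive wrapped in CompletionException; unwrap to
                    // recover the original cause before building a fallback.
                    Throwable cause = (err.getCause() != null) ? err.getCause() : err;
                    return "fallback (" + cause.getMessage() + ")";
                })
                .join();
    }
}
```

This bookkeeping (wrapping, unwrapping, per-stage handlers) is exactly what operators like Reactor's `onErrorResume` or RxJava's `onErrorReturn` package up for you.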

Summary

  • In both cases, response time is limited by I/O operations (filesystem, database, network) and the response time of downstream services.

  • Threads used for non-blocking processing don’t wait for I/O operations to complete. This gives better resource utilization and increases throughput, compared to blocking processing.

Oleksii Zghurskyi