Arvind Ravi

Concurrency and iOS

Technology, Software · 4 min read

The Problem

Concurrency is the idea of multiple things happening at once. With the increasing number of cores in the CPUs of our devices, software developers need to be able to take advantage of them. In the world of iOS software development, it's imperative that we make the experience of using apps as seamless as possible. Often, the difference between a good software product and a great one comes down to performance. And as users, we know how much we appreciate snappy, high-quality apps.

The problem of taking advantage of an increasing number of cores, without having to deal with a lot of overhead, has been around for a long time. Historically, we've resorted to certain techniques, like using threads, to solve it.

The Era of Threading

The concept of threads has been around for many years now. Threads, in computer programming, can be visualised as pathways or channels for the execution of code. We, as developers, could create a thread and run code inside of it.

This seemed pretty handy at first: create threads for different purposes, and keep code performant. But building a scalable system this way is rough. The responsibility of creating threads, and of managing the number of threads as conditions change, rests on the developer, and managing such a system is cumbersome.

The Asynchronous Approach

The asynchronous design approach, as you may already be familiar with it, deals with functions that initiate tasks that take time to complete, returning control to the caller without waiting for the task to finish. This style of system design has been prevalent at the operating-system level for a long time.

To understand asynchronous functions better, let me try an analogy. Imagine we're running a burger joint, and there are 5 customers waiting to place an order. We have two choices:

Choice One:

  • Take an order
  • Cook burger
  • Serve
  • Repeat

This would mean that the 5 waiting customers are served one by one, each customer waiting until all the burgers before theirs are cooked.

This will take a long time.

Let's introduce asynchronous processing to this system.

Choice Two:

  • Take orders as they come
  • Start cooking the burgers in parallel as the orders are being taken
  • Start serving the burgers as they are ready
  • Repeat

This would mean that each customer waits for a shorter period of time, and we're able to serve more people at once.

Do you see how different and better the process is? We apply a very similar approach in software design while using asynchronous functions:

When asynchronous tasks are initiated, the asynchronous function does the following:

  • Gets hold of a background thread
  • Starts the task
  • Notifies the caller when the task completes

This way we have access to a high-level API to perform concurrent tasks without having to deal with threads manually.
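The three steps above can be sketched in Swift using a completion handler. This is a minimal illustration, not code from the original article; the function name and queue choice are assumptions:

```swift
import Foundation

// A hypothetical asynchronous function: it gets hold of a background
// thread, starts the task, and notifies the caller when done.
func cookBurger(for order: String, completion: @escaping (String) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // 1. We now hold a thread from the global background pool.
        // 2. Start the (simulated) long-running task.
        let burger = "Burger for \(order)"
        // 3. Notify the caller via the completion handler.
        completion(burger)
    }
}

// Usage: the call returns immediately; the result arrives later.
cookBurger(for: "Customer 1") { burger in
    print(burger) // runs on a background thread
}
```

Note that the caller is free to take the next order while the burger cooks; the completion handler is how the result finds its way back.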

Enter iOS

In the iOS world, we have two very powerful APIs for designing concurrent patterns:

  • Grand Central Dispatch
  • Operation Queues

Grand Central Dispatch, often fondly called GCD, is a high-level API for creating asynchronous tasks. The system takes care of thread management, relieving the developer of that tedious task. GCD uses Dispatch Queues to implement concurrency.

Operation Queues are similar to Dispatch Queues, except that tasks can be configured before they are executed, so that a task's execution can be dependent on certain conditions or on other tasks.

Dispatch Queues

Dispatch Queues are basically queues that can take in tasks and execute them asynchronously. They dequeue tasks in FIFO order, and can execute them either serially or concurrently.


Once we've created a Dispatch Queue, we can submit statements to it within an async block like so:
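A minimal sketch (the queue labels are illustrative, not from the original listing):

```swift
import Foundation

// Create a serial dispatch queue; the label is just an identifier.
let workerQueue = DispatchQueue(label: "com.example.worker")

// Submit a task asynchronously: the call returns immediately,
// and the block runs on the queue in FIFO order.
workerQueue.async {
    print("Running on the worker queue")
}

// A concurrent queue still dequeues tasks in FIFO order,
// but may execute several of them at the same time.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)
concurrentQueue.async {
    print("Running concurrently")
}
```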

Example Scenario:

Let’s assume we’ve got to display a picture in an ImageView using a URL, and we want to do this asynchronously.

In this case we would do something like this:

  1. Create a Dispatch Queue
  2. Grab the image from the URL within the queue
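A sketch of those two steps, assuming a UIKit context with an image view to populate (the function and parameter names are illustrative):

```swift
import UIKit

func loadImage(from url: URL, into imageView: UIImageView) {
    // 1. Use a background dispatch queue for the slow work.
    DispatchQueue.global(qos: .userInitiated).async {
        // 2. Grab the image data from the URL within the queue.
        guard let data = try? Data(contentsOf: url),
              let image = UIImage(data: data) else { return }

        // Hop back to the main queue before touching the UI.
        DispatchQueue.main.async {
            imageView.image = image
        }
    }
}
```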

Here, we initiate the task on the background dispatch queue, and pass control back to the main queue when setting the image. This ensures that we don't do any UI-related work on the background queue, which would affect performance.

Operation Queues

Operations are a way to encapsulate work to be performed asynchronously. The Operation class helps in creating operations, and they can be used independently or with Operation Queues. An Operation can depend on other operations, which is an excellent way to execute them in a specific order. Dependencies can be added or removed using the addDependency(_:) and removeDependency(_:) methods.

Operation Queues are simply queues that hold operation objects and execute them based on their priorities and readiness. To specify the priority of an operation, we can use the queuePriority property. However, priorities shouldn't be used to implement dependencies between operations; use the addDependency(_:) and removeDependency(_:) methods for that instead.
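The distinction can be sketched with two block operations; the operation names here are illustrative:

```swift
import Foundation

let queue = OperationQueue()
var log: [String] = []

let download = BlockOperation { log.append("download") }
let parse = BlockOperation { log.append("parse") }

// parse will not start until download has finished.
parse.addDependency(download)

// queuePriority only orders operations that are already ready;
// it must not be used as a substitute for dependencies.
download.queuePriority = .high

queue.addOperations([download, parse], waitUntilFinished: true)
// log is now ["download", "parse"]
```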

An operation object begins executing some time after it's added to a queue, and an operation "finishing" doesn't necessarily mean it ran to completion. An operation can be cancelled before or while it executes; cancelling leaves the object in the queue, but notifies it to stop its task as quickly as possible.

Operations and Operation Queues are thread-safe, and their methods can be called from any thread.

We'll implement the same image-loading task that we used Dispatch Queues for previously, so we can see how different the API is.

  1. Defining an Operation (by subclassing the Operation class)
  2. Creating an Operation object
  3. Adding the operation object to an operation queue
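A sketch of those three steps; since the original listing isn't shown, the property names and the example URL are assumptions:

```swift
import UIKit

// 1. Define an Operation by subclassing Operation.
class ImageOperation: Operation {
    let url: URL
    var downloadedImage: UIImage?

    init(url: URL) {
        self.url = url
    }

    // main() is invoked when the queue executes the operation.
    override func main() {
        // Check for cancellation before starting expensive work.
        if isCancelled { return }

        guard let data = try? Data(contentsOf: url) else { return }

        // Check again after the long-running download, in case the
        // operation was cancelled while it was in flight.
        if isCancelled { return }

        downloadedImage = UIImage(data: data)
    }
}

// 2. Create an operation object.
let operation = ImageOperation(url: URL(string: "https://example.com/image.png")!)

// 3. Add it to an operation queue to start it.
let queue = OperationQueue()
queue.addOperation(operation)
```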

Our Operation subclass, ImageOperation, has a main method, which the queue invokes when the operation executes; that's where we've implemented our image-loading task. We check isCancelled twice: once before initiating work that takes time to process, and once after it finishes. This is considered good practice when subclassing Operation.

The rest of what happens is quite straightforward.

We have now seen how and why concurrency is important when crafting software, and the various methods we can employ to implement it on iOS. The system takes care of a lot of it under the hood, and exposes high-level APIs that are both easy to use and robust.

Here’s a playground with these implementations we’ve discussed:

I hope this was informative. I'm hoping to write more frequently, so feel free to leave comments about what you think, and about anything specific you'd like me to cover.