6 Characteristics That Make APIs Fit for Application Integration

Olga Annenko


There is hardly an IT application nowadays that doesn’t provide an API specifying how the application should interact with the rest of the IT ecosystem. So it’s no wonder that IT staff, even at large enterprises, increasingly use APIs to integrate multiple systems with each other, usually new ones with the existing IT estate.

At the same time, there are APIs and then there are APIs. Some of them make integration a breeze while others turn it into a nightmare for integration specialists.

As providers of a cloud-based integration platform, we deal with all kinds of APIs – excellent ones, good ones, not-so-good ones and just plain awful APIs. That is why we’ve decided to share with software providers a few guidelines on what an API should look like to be a perfect fit for integration projects (and therefore make your product popular with developers and end customers).

Using APIs for integrating Cloud, IoT and Mobile

Most software and mobile applications nowadays have APIs, some of them to allow you to easily find the best restaurant in a new town, some of them to feed a company’s logistics software with the latest information about the current location of goods. APIs are widely used both in B2C and B2B scenarios.

Even more than that, enterprise systems increasingly support APIs as enterprise IT infrastructure gets more and more interconnected. In the business world, APIs have become the driving force behind continuous and automated data exchange between different cloud-based and on-premise applications, systems, databases and even platforms.

The main reason behind that is that no business application is used as a standalone solution.
If your company uses a customer service system like Zendesk or Help Scout, you would need to integrate it with the CRM system of your choice such as Salesforce or SugarCRM if you want to enable your support to immediately react to priority customers’ queries and issues.

The range of API implementations can go far beyond SaaS-to-SaaS integrations, though. In more complex digital transformation scenarios, it is quite a common practice to use APIs to connect IoT platforms to a lightweight integration middleware like an iPaaS, and then connect the latter to a legacy ESB.

It shouldn’t come as a surprise, then, that APIs are gaining popularity with large enterprises, which use the application programming interface to share data with their partners and suppliers, and even to explore new market opportunities, as in the case of IoT or mobile.

Two methods to integrate with the help of API

Before I get to summing up the characteristics of an API that make integration a breeze, let’s quickly go through the basics of integration via APIs.

Basically, the mechanics of application and data integration differ depending on the quality of an API. You can either actively fetch data by polling an API or let an API send you data. Both methods have their pros and cons.

When you poll an API, you are in charge of the data flow. If you need more data, you simply request more. If you know there is already too much of it, so that it has got stuck somewhere along the way, you just decrease the amount of data you receive per request. This matters because being in control of the data flow is highly relevant both for better integration performance and for guaranteeing that data won’t get lost when your storage is full.

The most obvious disadvantage of this method is that it can fail to deliver data in real time. When you poll an API, you need to define clearly how often this will happen. Theoretically, you could poll it every other second, but most good APIs impose limits on how often you’re allowed to do that and how much information you’re allowed to fetch in one go. So, let’s say you can schedule polling every three minutes. If data changes faster than you are allowed to fetch it, you are bound to have a delay in data processing.
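The polling method described above can be sketched in a few lines of Python. This is only an illustration: `fetch_page` and `process` are hypothetical stand-ins for the real API call and your own processing step, and the interval and page size are exactly the knobs an API’s rate limits would constrain:

```python
import time

def poll_changes(fetch_page, process, interval_seconds=180, page_size=1000):
    """Fetch changed records batch by batch at a fixed interval.

    The caller stays in control of both the page size and the polling
    frequency, which is the main advantage of the polling approach.
    """
    while True:
        records = fetch_page(limit=page_size)
        for record in records:
            process(record)
        if len(records) < page_size:
            break  # caught up for now; a long-running job would keep looping
        time.sleep(interval_seconds)  # respect the API's rate limits
```

Shrinking `page_size` is the emergency brake mentioned above: if downstream storage fills up, you simply ask for less data per request.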

The other method involves setting up a webhook trigger. In this case, the API sends data to the webhook trigger on its own, while the trigger just sits there and waits for it. This can be considered an ideal solution because it doesn’t really matter what the API looks like. All you need is for the sending system to deliver data as soon as it changes.

Another advantage is that you would have very little to no delay between data receiving and data processing, because it will be automatically processed as soon as it arrives.

But the drawback of this method, unlike polling an API, is the lack of control over the data flow. The sending system will just keep delivering batch after batch as soon as it gets confirmation that the previous one was indeed delivered and, for example, stored in the message queue. There is no way to limit the amount of incoming data in case of an emergency.
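To make the contrast concrete, here is a minimal, hypothetical webhook handler in Python. About the only flow control available on the receiving side is a bounded buffer that rejects deliveries when it is full; everything else is decided by the sender:

```python
import json
import queue

# A bounded buffer is the one piece of flow control we can add on the
# receiving side, since the sender decides when and how much to push.
incoming = queue.Queue(maxsize=10_000)

def handle_webhook(raw_body: bytes) -> int:
    """Accept one webhook delivery and return an HTTP status code.

    A real receiver would sit behind an HTTP server; this sketch only
    shows the acknowledge-then-store contract.
    """
    try:
        event = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # malformed payload: reject it
    try:
        incoming.put_nowait(event)
    except queue.Full:
        return 503  # buffer full: ask the sender to retry later
    return 200  # acknowledged: the sender may now push the next batch
```

Returning 503 is only a polite hint; whether the sender actually backs off and retries is entirely up to the sending system.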

This is what makes a good API: 6 characteristics to pay attention to

An API that is the best fit for integration purposes is actually suitable for both methods described above. Sometimes, though, one is preferred over the other. And while the second method is more or less universal (if you remember, with webhook triggers we don’t care about the quality of the API), it is the first method that reveals whether an API is good or not.

So, what makes a good API for integration, and why are these characteristics so important?


Modification timestamps / search by criteria:
A good API should allow you to search data by certain criteria, most importantly by date. Simply because, after the first initial data synchronisation, it is typically the changes that we are mostly interested in. In other words, we need the information that has been changed (updated, deleted, corrected, etc.) or added since the last time we triggered the synchronisation.

So, when a trigger polls data from an API, the first important question to answer is how to detect changes in the data.

The only way to get this data is to ask for changes since a particular timestamp. For example, say the first, initial data synchronisation happened on May 01, 2016. We specify this date as the point of reference and request only the data that has come in since then. This is why it is important that, in addition to a search-by-criteria option, an API also provides timestamps.
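Such a delta request usually boils down to a URL with a timestamp filter. A minimal Python sketch might look like this; note that the parameter names (`updated_since`, `limit`, `sort`) are hypothetical, since every API defines its own:

```python
from urllib.parse import urlencode
from datetime import datetime

def build_delta_url(base_url, last_sync, page_size=1000):
    # `updated_since`, `limit` and `sort` are made-up parameter names;
    # the real ones come from the API's documentation.
    params = {
        "updated_since": last_sync.isoformat(),
        "limit": page_size,
        "sort": "updated_at",
    }
    return f"{base_url}?{urlencode(params)}"

# Ask only for what changed since the initial sync on May 01, 2016:
url = build_delta_url("https://api.example.com/contacts", datetime(2016, 5, 1))
```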


Paging:
Naturally, there can be huge amounts of data, even if it is only the changed data. To deal with it efficiently, we need a way to specify that we don’t want all the changed data in one sitting but, say, only the first “page” of it, for example one thousand data records.

This is what paging is about. A good API must be able to limit the amount of data that can be received in one go, as well as the frequency of requests for data. It should also be able to notify you about how many “pages” of the data are left.


Sorting:
Paging can only work, though, when data is ordered; otherwise it is impossible to know whether you have already received a given record or not. Therefore, an API should also allow data to be sorted, at least by the time of modification.

So, having all three characteristics allows us to specify that we only need the data that came in after May 01, 2016, and only the first “page” of it, in our example one thousand data records, ordered by the time of change. We then remember the modification timestamp of the last record in this one-thousand-record list and ask for the next batch of data starting from that moment. The mechanism for remembering timestamps can vary; we, for one, use snapshots for that.
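Putting timestamps, paging and sorting together, a delta-sync loop with a timestamp snapshot might look like the following sketch. Again, `fetch_page` and the snapshot helpers are illustrative stand-ins for a real API client and persistent storage:

```python
def sync_changes(fetch_page, process, load_snapshot, save_snapshot,
                 page_size=1000):
    """Delta sync combining timestamps, paging and sorting.

    `fetch_page(since, limit)` must return records ordered by their
    modification time; the snapshot stores the last timestamp seen,
    so an interrupted sync can resume where it left off.
    """
    since = load_snapshot()  # e.g. "2016-05-01T00:00:00"
    while True:
        page = fetch_page(since=since, limit=page_size)
        if not page:
            break
        for record in page:
            process(record)
        since = page[-1]["updated_at"]  # newest change in this batch
        save_snapshot(since)            # survive restarts mid-sync
        if len(page) < page_size:
            break  # no further pages for now
```

Because the records are sorted by modification time, the last record’s timestamp is a safe cursor: everything before it has already been fetched.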

Having said all that, if you use the OData protocol (preferably the latest version, 4) to create your APIs, they will possess all the characteristics mentioned above by default, as well as some other quite important ones, such as upserting an entity or conflict resolution (to avoid data duplication).


JSON support / REST:
To be fair, an API doesn’t have to be RESTful in order to be considered good. However, most new APIs are REST APIs that, by default, support JSON, and there are quite a few good reasons for that.

REST APIs are stateless, which makes them a good fit for applications that require a considerable amount of back-and-forth messaging, e.g. mobile apps. If an upload to a mobile application is interrupted due to, say, loss of reception, REST APIs make it very easy to retry the process. With SOAP this is possible too, but with considerably more effort. In addition, REST APIs are lightweight and more compatible with the web, as they use simple URIs for communication.

REST APIs support various formats, with JSON being only one of them; TXT, CSV and XML are other examples. This means that you as a developer have a choice between different formats (as opposed to SOAP, which supports only XML) and can go for the one that really fits the purpose.

Using JSON with REST is considered best practice, though. Mainly because, unlike XML, JSON has a syntax very close to that of most programming languages, which makes it very easy to parse in almost any language. Not to mention that JSON is also really easy to create.
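For instance, turning a JSON payload into native data structures, and building JSON back from them, takes a single call in Python (the payload below is, of course, made up):

```python
import json

payload = '{"id": 42, "company": "Acme", "tags": ["crm", "billing"]}'
record = json.loads(payload)       # one call: JSON text to a native dict
name = record["company"]           # fields map directly onto language types
# Creating JSON is just as direct:
out = json.dumps({"ok": True, "id": record["id"]})
```

Most other languages offer equally thin mappings, which is exactly why JSON payloads are so cheap to work with compared to XML.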


Authorization via OAuth:
OAuth also belongs on the list of what makes a good API. It is an open standard for authorization, and even though some developers consider it a pain in the …hm… neck, OAuth provides considerably better usability for application users and developers than any other method.

Contrary to widespread belief, OAuth isn’t the same as signing in with your Facebook or Twitter account. OAuth means that if application users (e.g. Xero users) want to connect this application to another one via some integration service, they can authenticate themselves by explicitly granting this service access to the application: no more, no less.

Unlike granting access with, for example, an API key, OAuth authorization is considerably faster: you just need to click a button confirming the access grant. Any other method means that users of your API have to make an extra effort, which is not great for delivering a superb user experience.
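That “click a button” step corresponds to step one of the OAuth 2.0 authorization-code flow: sending the user to the provider’s consent page. A sketch of building that URL is below; the endpoint, client ID and scope values are all made up for illustration:

```python
from urllib.parse import urlencode

def authorization_url(auth_endpoint, client_id, redirect_uri, scope):
    """Build the URL a user visits to click 'grant access'.

    After consenting, the provider redirects back to `redirect_uri`
    with a short-lived code, which the integration service then
    exchanges for an access token (not shown here).
    """
    params = {
        "response_type": "code",   # the authorization-code grant type
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,            # exactly what access is being granted
    }
    return f"{auth_endpoint}?{urlencode(params)}"
```

The `scope` parameter is what makes the “no more, no less” promise explicit: the user sees and approves exactly the access being requested.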


Good documentation:
Last, but not least.
It seems that providing good, extensive documentation for an API is something developers should understand by default, without needing to be told. Yet there is a massive number of APIs that are extremely poorly described, even APIs that are otherwise very good for integration.

Therefore, we can’t stress enough how important solid API documentation is for integration projects: it is one of the factors that drive down project implementation time and, hence, project costs. And good documentation surely adds to an API’s popularity ;-)

If you have other points to add to this list of characteristics of what makes a good API and makes it great for integration, please do share them below. We’d love to learn about them!

Looking for an integration partner for your application?

Request Live Product Tour

About the Author

Olga Annenko


Olga Annenko is a tech enthusiast and marketing professional. She loves to write about data and application integration, API economy, cloud technology, and how all that can be combined to drive companies' digital transformation.
