Data integration best practices: Connection points. Web Server API vs Database

In the previous article, we went through various techniques for detecting data changes in systems. This is essential if we want updates to data records as soon as they occur, so that, for example, we send orders to an address that is actually current. As mentioned before, for the data exchange to happen, there must be a way to connect to a system in the first place. Here, there is a difference between on-premises and cloud-based systems.

Cloud-based software-as-a-service providers usually expose only APIs on the web server, through which you can interact with the database. This way they can ensure, with a higher level of certainty, that the API behavior doesn't change as the product develops, and enforce the relevant business rules. Plus, it is considerably easier to build a secure mechanism into an API and to protect the database simply by not giving any access to it.

Then there are some custom-built or special-purpose applications that only expose database connections, but no API.

Last but not least, an on-premises system often includes both a web server that exposes an API and a corresponding database. And indeed, since the application resides on-premises, it is easier to ensure that the same security standards apply to the API as well as to the database. This means that in the case of on-premises systems, both the API and the database may provide connection opportunities.

Now let’s review their pros and cons.

Web Server API

Exposing a web server API is often the only way for web-based SaaS.

Pros of this strategy:

  • Enforces the largest set of business rules
  • Provides the highest level of abstraction
  • Enables the most granular permissioning
  • Can have mechanisms to handle demand spikes
  • The chances that a published web server API will work after one or two years are quite high – or at least you will be notified when the changes are made

Cons of this strategy:

  • This mechanism builds on top of HTTP(S), which is not entirely reliable: any individual HTTP(S) call may fail. It is also fairly verbose, meaning that more data is transmitted than strictly necessary
  • The set of operations you can perform through a web server API is rather limited compared to the full set of operations that are possible
  • Has performance overhead
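Because any individual HTTP(S) call may fail, integration code that talks to a web server API typically wraps calls in a retry mechanism. Below is a minimal sketch of a retry-with-exponential-backoff wrapper; the `with_retries` helper and the `flaky_fetch` stand-in for an HTTP request are hypothetical names, not part of any particular library:

```python
import time

def with_retries(call, attempts=3, base_delay=0.1):
    """Retry a callable with exponential backoff.

    `call` is any function that raises on failure (e.g. a wrapped
    HTTP request); transient errors are retried, and the last error
    is re-raised once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulate a call that fails twice with a transient error, then succeeds:
state = {"calls": 0}

def flaky_fetch():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient network failure")
    return {"status": 200, "body": "order data"}

result = with_retries(flaky_fetch)
print(result["status"])  # 200
```

In a real integration you would retry only transient failures (timeouts, 5xx responses) and only for idempotent operations, so that a retried order is not created twice.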

Database Views/Database Tables

Pros of this strategy:

  • Can be more performant
  • Often database expertise is more plentiful than application expertise
  • Often results in fewer bugs, because there are simply fewer layers of logic to go through. Plus, it is simple enough that it is hard to misunderstand
  • Can be used when web server APIs don’t exist or are otherwise not very good

It is also important to note that within this arrangement, database views provide more abstraction than database tables. To understand this, imagine you have a contacts table and an address table as separate tables. In order to read data from them, you have to know that these two tables exist in the first place. Then you would need to know how they are related, or do the join(s) yourself. With a database view, by contrast, several tables can be combined and presented as one.
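The contacts-and-addresses scenario above can be sketched with an in-memory SQLite database. The table and view names here (`contacts`, `addresses`, `contact_directory`) are illustrative, not from any real system; the point is that the view hides the join from the consumer:

```python
import sqlite3

# In-memory database standing in for an application's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (
        contact_id INTEGER REFERENCES contacts(id),
        city TEXT
    );
    INSERT INTO contacts VALUES (1, 'Ada');
    INSERT INTO addresses VALUES (1, 'London');

    -- The view hides the join: consumers read one "table"
    -- without knowing how contacts and addresses relate.
    CREATE VIEW contact_directory AS
        SELECT c.name, a.city
        FROM contacts c
        JOIN addresses a ON a.contact_id = c.id;
""")

rows = conn.execute("SELECT name, city FROM contact_directory").fetchall()
print(rows)  # [('Ada', 'London')]
```

A consumer querying `contact_directory` never needs to know that two underlying tables exist, which is exactly the extra layer of abstraction a view provides over raw tables.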

Cons of this strategy:

  • Most of the time, this is not an official API. Hence, it is subject to unexpected changes during version upgrades, which might result in a disruption of an integration flow
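Since the schema is not an official API, one common mitigation is a defensive check at the start of each integration run: verify that the columns you depend on still exist before reading or writing anything. A minimal sketch using SQLite's `PRAGMA table_info` (the `contact_directory` table and the expected column set are hypothetical):

```python
import sqlite3

# The columns our integration flow depends on (hypothetical contract).
EXPECTED_COLUMNS = {"name", "city"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact_directory (name TEXT, city TEXT)")

# PRAGMA table_info returns one row per column; index 1 is the name.
found = {row[1] for row in
         conn.execute("PRAGMA table_info(contact_directory)")}

schema_ok = EXPECTED_COLUMNS <= found
print(schema_ok)  # True
```

If the check fails after a version upgrade, the flow can stop with a clear error instead of silently reading wrong or missing data.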

Admittedly, this was quite a short section. But in order to move on to the next one, we felt we needed to cover these basics.

The next section will be about yet another integration challenge related to technical mechanics. We will go through techniques to effectively maintain and keep up to date the data that is “duplicated” across several different systems. Stay tuned by following us on Twitter and LinkedIn!


About the Author

Kevin Mobbs

Kevin has a background in Molecular Microbiology research and technological product development. Following a brief academic research career, Kevin first experienced product development with the successful startup Genera Technologies. This was followed by seven years' experience as a Product Manager with QIAGEN and Lonza in the biotech devices industry, and over five years of successful innovation consulting for the crowd-sourcing SaaS pioneer InnoCentive as well as the world's largest re-insurer, Munich RE. Kevin is an advocate of design thinking methods and of enabling ‘dumb questions’ to be asked and answered. ‘If something cannot be made fun, then it may not be worth doing!’
