It’s mid-May and therefore, it’s high time for our April Feature Alert. As hinted last time, we have an awesome new feature to introduce. Please meet:

Real-time Flows for Real-time Data
As you probably know, our platform is microservices-based, which means that each integration component runs inside its very own Docker container. Previously, when you wanted to execute a task consisting of, say, two components with a Data Mapper in between, our platform would start the corresponding containers (three in total, in this case) one after the other, passing data from the first container to the last and starting each container only after stopping the previous one. As a result, it could take quite a long time for your task to start running: up to 15-20 seconds.
So we decided to improve this procedure by introducing our Real-time Flows on demand. In these flows, the containers start once and stay running, «glued» together via our messaging queue. Thanks to this, data passes through all of them basically at the speed of light, with flow executions taking only 100 to 500 milliseconds, depending on the real-time data volume.
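The «glued together» idea can be sketched in miniature. The snippet below is a simplified illustration, not elastic.io’s actual implementation, and all names in it are hypothetical: three worker “components” stay alive permanently and are connected by in-memory queues, so a message travels through the whole chain with no per-step start-up cost.

```python
import queue
import threading

# Hypothetical sketch: long-running "components" glued together by
# message queues. Each worker stays alive and forwards its transformed
# message to the next queue, so nothing is started or stopped per message.

def worker(transform, inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:          # poison pill: shut the worker down
            outbox.put(None)
            break
        outbox.put(transform(msg))

# Three hypothetical components: trigger -> Data Mapper -> action
q1, q2, q3, q4 = (queue.Queue() for _ in range(4))
steps = [
    (lambda m: {**m, "triggered": True}, q1, q2),   # trigger component
    (lambda m: {"mapped": m}, q2, q3),              # Data Mapper
    (lambda m: {**m, "delivered": True}, q3, q4),   # action component
]
threads = [threading.Thread(target=worker, args=s) for s in steps]
for t in threads:
    t.start()

q1.put({"order_id": 42})          # real-time data enters the flow...
result = q4.get()                 # ...and exits after all three steps
print(result)

q1.put(None)                      # stop all workers
for t in threads:
    t.join()
```

The poison-pill `None` message simply propagates through the chain so every worker shuts down cleanly; in a real deployment the containers would keep listening for the next message instead.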
If you would like to have Real-time Flows activated for your project, please contact our support or sales team.
Introducing Real-time Flows brings us to the next new feature:
Introducing the Request-Reply message exchange pattern (now in beta)
So far, our webhooks were used only to receive data for processing on our platform. As soon as we received your data, these webhooks would reply immediately with a “Thank you” message, which meant as much as “We received your data”, while the actual processing happened later. In other words, we used the asynchronous Fire-and-Forget pattern.
Now you can make your webhooks reply with the data produced inside your tasks by placing the “HTTP Reply” component inside your task. Combined with Real-time Flows, this speeds up data processing massively.
With this new feature now in place, you can even build your own APIs on top of the elastic.io integration platform as a service.
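To make the contrast between the two webhook styles concrete, here is a minimal, self-contained sketch. The endpoints, payloads and the `run_flow` helper are invented for illustration only and are not elastic.io’s real API: one route acknowledges immediately (Fire-and-Forget), the other runs the flow synchronously and replies with the data the flow produced (Request-Reply).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_flow(payload):
    # Hypothetical stand-in for a flow ending in an "HTTP Reply" component.
    return {"echo": payload, "status": "processed"}

class Webhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if self.path == "/fire-and-forget":
            reply = {"message": "Thank you"}   # processing would happen later
        else:                                  # /request-reply
            reply = run_flow(body)             # flow output is the response
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):              # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Webhook)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

def post(path, payload):
    req = urllib.request.Request(base + path, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

ack = post("/fire-and-forget", {"order": 1})   # immediate acknowledgement
out = post("/request-reply", {"order": 1})     # data produced by the flow
print(ack)
print(out)
server.shutdown()
```

The caller of the request-reply endpoint gets the processed result in the same HTTP round trip, which is exactly what makes it possible to build your own APIs on top of the platform.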
If you would like to have the Request-Reply message exchange pattern activated for your project, please contact our support or sales team.
Step-by-Step Execution improvements
At the same time, we continue working on the new feature introduced in the March Feature Alert, namely the Step-by-Step Execution.
Its performance is now sped up by a whole 50 per cent. Thanks to the improved packaging of trigger and action components, the Step-by-Step Execution feature now works even faster for you than before.
Now you can see your logs already during the set-up process for your integration flows. If you remember, the Step-by-Step Execution feature lets you execute each individual step of your integration flow in the designer window with your own real-time data. Previously, though, if anything didn’t go quite as expected, you had to go back to your dashboard to see the logs.
With our latest improvement, you get the logs straight in the designer window
while still setting up the flow. This way you can track down any possible errors and fix them immediately.
We also improved how your real-time data samples are displayed in the designer window. If you have already tried out the Step-by-Step Execution, you will have noticed that you could see your data sample only on mouse-click, in a kind of tooltip. This sufficed for short data samples, but it proved quite cumbersome for long ones. Now all your data samples are displayed in neat windows with a scrolling option.
Last but not least, you can now expand and collapse task windows in the designer to keep a clear overview.
What to expect in May
For May, we have planned to release one Grand Feature, namely Organizations. This new feature will include common billing, task sharing, component sharing, credential sharing and role-based access control within one organization, meaning you can define which users have access to which information. Stay tuned ;-)
Try the new features now
Request Live Product Tour