Batch versus Real-Time Differences

Batch processes integrate all of the data matching given criteria into the target system in one large run.  Real-time processes monitor the source system for new transactions and respond immediately, integrating each new transaction with the target system transaction-by-transaction, in real-time.  Batch processes take a great deal of processing power and a long time to complete, since they must process all records that meet the integration criteria at once.  Speed ratings (transactions per second, for instance) are critical for batch processes because they can run for a long time to get through all records; you may need to ensure that the integration completes in an acceptable amount of time each time it is run.  Real-time processes monitor and integrate data one transaction at a time as transactions appear.  This means that real-time integration processes are always ‘caught up’, so transactions-per-second capabilities are of much less concern for these processes.
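
To make the contrast concrete, here is a minimal Python sketch of the two styles; the record shapes and helper names are illustrative placeholders, not any particular product’s API.

    def transform(row):
        # Map a source record into the shape the target system expects.
        return {"id": row["id"], "qty": row["qty"]}

    def run_batch(pending_rows, target):
        # Batch: process every record meeting the criteria in one large run.
        # Raw throughput matters here, because the run must finish within
        # an acceptable time window.
        for row in pending_rows:
            target.append(transform(row))
        pending_rows.clear()

    def on_new_transaction(row, target):
        # Real-time: invoked once per transaction as it appears, so the
        # target is always caught up and per-run speed matters far less.
        target.append(transform(row))

    source = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
    target = []
    run_batch(source, target)                        # one big run
    on_new_transaction({"id": 3, "qty": 7}, target)  # one at a time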

Batch versus Real-Time – What is the best choice?

No one methodology is better than another in all cases.  In some cases it makes sense to automate integration via batch processes, and in other cases real-time integration makes more sense.  A mature and capable integration solution will allow each integration map to be deployed however you see fit.  A typical implementation will see a mixture of maps automated as batch processes and as real-time processes.

Batch is best when …

Batch should be considered when:

The data being integrated is not time-sensitive

If the data you are integrating loses little value from being hours or days old, then batch integration could be a good choice.

You have limited integration between systems and you need to be sure that changes from many areas are kept in sync with as few processes as possible.

If you are unable or unwilling to create data maps for every individual element requiring integration, a batch process that integrates many fields can save time and get the integration up and running sooner.

You have to integrate a large number of data elements, but not very many transactions (records).

One of the pitfalls of batch integration is the amount of time it takes to process large source record sets.  If the number of transactions occurring is minimal, then batch integration could be a good choice.

The data being integrated is closely coupled with other data that is updated on a batch basis.

If you are integrating data that is closely coupled to data that is updated via a batch process, then it makes sense for the closely coupled data to be updated via a batch process as well.  For instance, if you are using an ERP system and integrating forecast orders (which are updated via a batch process) to another system, there may be no advantage to monitoring the forecast orders in real-time, since they only change with each ERP batch run.  In this case, it might be more efficient to set up a batch integration that is triggered just after the forecast orders are updated.
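
As a concrete illustration, the dependent batch integration can simply be chained to the upstream ERP run.  The sketch below assumes the ERP job drops a completion marker file when it finishes; the marker path and function names are hypothetical.

    import os
    import time

    MARKER = "forecast_update_complete.flag"  # assumed ERP completion marker

    def integrate_forecast_orders():
        # Placeholder for the actual batch integration map.
        print("Integrating forecast orders into the target system...")

    def run_after_erp_batch(poll_seconds=60):
        # Wait for the upstream ERP batch to finish, then run the
        # dependent integration, instead of monitoring in real-time.
        while not os.path.exists(MARKER):
            time.sleep(poll_seconds)
        integrate_forecast_orders()
        os.remove(MARKER)  # reset for the next ERP run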

You don’t mind a little manual intervention

Batch processes can be automated just like real-time processes can, but they have a catch.  If a special situation arises where a refresh of the data is required outside the normal batch processing times, then the batch synchronization will need to be triggered manually.  This can be a big deal if the person with all of the system smarts is not available, or if the batch takes an excessively long time to complete.  This situation doesn’t come up every day, but it always comes up eventually when dealing with batch processes.

You have the necessary power to run large batch processes

Batch processes can take a long time to run and take up a lot of resources.  Since these processes usually have a time window in which they need to complete, adding more processes can make for a time crunch or a processing power crunch.  Before choosing to integrate systems using batch processes, be sure that you have excess capacity and/or time to allow the integration to complete in an acceptable amount of time.

Real-Time is best when …

Real-time integration should be considered when:

The data being integrated is time-sensitive

If the data you are integrating loses value when it is even hours old, then real-time integration is a good choice to ensure that the data is always as current as possible.

You have the development expertise and time required to create integration maps that describe exactly what needs to be integrated, without resorting to integrating large collections of data when only specific data within those sets is required.

It’s ideal to integrate only the data that actually needs to move.  Sometimes a shortage of development time or expertise leads to more data being integrated than is necessary.  If you have the time and skill to create exacting integration maps, then you can create a greater number of real-time integration processes rather than a smaller number of batch processes.

You have to integrate many transactions (records).

If you are integrating systems with high transactional throughput, then real-time integration can be a better choice because it is always caught up with its processing.  This averts the problem of excessively long batch runs that can hog machine resources and interfere with other processes.

The data being integrated is loosely coupled with other data that is updated on a batch basis.

Most environments have a mix of data that is updated in real-time and data that is updated via batch processes.  If the data you are integrating is not tied to a batch process, then you are free to choose real-time integration to integrate it with other systems as efficiently as possible.

You want to avoid manual intervention

Real-time processes are always caught-up, so there is rarely a need for manual intervention to get them caught-up or synchronized with other activities.  Real-time integrations tend to “just work” and require the least amount of manual attention.

You do not have the necessary power to run large batch processes

Batch processes can take a long time to run and consume a lot of resources.  Since these processes usually have a time window in which they need to complete, adding more processes can create a time crunch or a processing power crunch.  If you do not have the excess capacity and/or time to allow a batch integration to complete in an acceptable amount of time, then you should choose real-time integration.  Real-time integration is much less processor-intensive and does not require a time window to complete.

A Note on THE LINK®

THE LINK allows any map to be deployed as either a batch process or a real-time process.  There is no change in the map itself; this is simply a deploy-time decision.  Batch maps can be scheduled, and real-time maps can be associated with up-time and down-time schedules so that they do not run at inappropriate times.
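
As a generic illustration (not THE LINK’s actual configuration or API), an up-time/down-time gate for a real-time map can be as simple as checking the clock before handling each transaction; the window below is an assumed example.

    from datetime import datetime, time

    UPTIME_START = time(6, 0)   # assumed up-time window: 06:00 to 22:00
    UPTIME_END = time(22, 0)

    def within_uptime(now=None):
        now = (now or datetime.now()).time()
        return UPTIME_START <= now < UPTIME_END

    def maybe_handle(transaction, handler, held):
        # Run the real-time map only inside its up-time window;
        # otherwise hold the transaction for the next window.
        if within_uptime():
            handler(transaction)
        else:
            held.append(transaction)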

Efficient Cloud Integration

[POSTED April 16, 2018]

Cloud integration is changing

Cloud-based systems such as D365 and others demand a new paradigm for how to make them work best with your other enterprise software.  Reading tables and writing files to move your information isn’t good enough anymore.  New systems demand new strategies that leverage local reading and callback servers.  Local reading ensures that the system is read without using network bandwidth or polling, and proper use of callback servers means that data is transmitted across the network efficiently, without polling or unnecessary traffic.
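
For illustration, a local callback server can be as small as the sketch below, assuming the cloud system can be configured to POST change notifications to an on-premise endpoint; the port and payload shape are placeholders.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CallbackHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # The cloud system pushes each change here, so no polling
            # traffic is ever generated from our side.
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            print("Change received:", payload)  # hand off to the integration map
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CallbackHandler).serve_forever()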

Are you still using file-based methodologies to move data in and out of the cloud?  Are you polling or flooding your network with unnecessary traffic?

THE LINK® with cloud-based monitoring and local callback server technology may be your answer to efficient cloud integration.

 

Integration should be a snap, not a development process.

[POSTED March 12, 2018]
How much time does it take you to build a simple integration map? If the answer is more than an hour, then perhaps it is time to upgrade your integration software. Building a map should be as simple as mapping fields or invoking an API, and nothing more. If you are building forms, controls, error trapping, logging, reporting, etc., then it is no wonder your “simple” integration takes so much effort.

Upgrade to an integration suite that magnifies your productivity and keeps you focused on business rules, not custom development.
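
For a sense of what “just mapping fields” can look like, here is a minimal declarative field map sketched in Python; every field name and conversion here is illustrative only.

    FIELD_MAP = {
        "CustomerNo": "customer_id",
        "OrderQty": ("quantity", int),   # rename plus a type conversion
        "ShipDate": "ship_date",
    }

    def apply_map(source_record):
        # Translate one source record into the target system's field names.
        target = {}
        for src, spec in FIELD_MAP.items():
            if isinstance(spec, tuple):
                name, convert = spec
                target[name] = convert(source_record[src])
            else:
                target[spec] = source_record[src]
        return target

    print(apply_map({"CustomerNo": "C-100", "OrderQty": "5", "ShipDate": "2018-03-12"}))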

The Link® lets you target complex APIs of any type by allowing you to control the API construction and the population of the required fields. Then you can test your link and deploy it, all without ever having to write a user interface or user controls. With built-in error trapping, logging, reporting, monitoring, review, and transaction retry capabilities, all you have to worry about is your business rules and nothing else, meaning that integration is a snap. You are up and running with just your business rules in place, and no other code development effort required.

ERP Integration with batch-based processes is dying!

[POSTED February 27, 2018]
The old way of importing manually or through schedules is being replaced by automated transactional integration, which synchronizes and replicates data in real-time, enabling state-of-the-art planning, scheduling, capacity planning, MES, and supply chain systems to work dynamically and always be up to date. Production activities are processed in real-time, resulting in more current, better-informed reporting and decision making.

Stuck in the 1980s? Embrace the best practices of real-time transactional integration today.

(The old way. Timers, batches, lots of waiting)

(The new way. Real-time transactional synchronization. All systems are up-to-date, always)

Server-Based Integrations are dying!

[POSTED February 20, 2018]
Using centralized applications for integration is rapidly becoming an outdated methodology due to its inability to stand up to the features, speed, and flexibility of distributed microservices. It used to be that integrating data between systems could be accomplished by integration applications that ran on a centralized server, requiring that all data, all rules, and all horsepower be dedicated to that central machine. This centralized (or monolithic) integration methodology is being supplanted by small, nimble, distributed microservices.

Unlike web services, distributed microservices do not need to be called upon to perform work. They work silently and automatically in the background at all times. They wait for their integration trigger to occur (a new or changed file, database record, website, API call, etc.), and as soon as the trigger occurs they apply the integration map (to transform and map the data) in real-time. These real-time service maps are deployed as discrete maps that are only concerned with mapping a single endpoint collection to another endpoint collection (think file to table, table to API method, or API method to XML, etc.). As new integration maps are needed, they are created using a design tool and then deployed as individual Windows services. The advantages of using microservices for integration over a traditional monolithic integration server are many (a minimal sketch of the trigger-and-map pattern follows the list below), including:

1. Services are tiny in size and tiny in memory use, requiring few resources. There is no need for a massive integration server with a ton of memory and expensive processors.
2. With no need to centralize data, services can be deployed anywhere. Deploy them close to the data they are monitoring, or close to the target system, or use two services (one to monitor and one to write) to ensure that everything has as little latency as possible.
3. Change control becomes more isolated and more flexible, since changes can be made to individual integration maps without affecting other deployed maps.
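
To make the trigger-and-map idea concrete, here is a minimal sketch of a single-map service watching a drop folder. It uses Python and a simple scan loop purely for illustration; per the post, a production map would be deployed as a Windows service, typically driven by file-system events rather than a loop, and all paths here are placeholders.

    import csv
    import json
    import time
    from pathlib import Path

    WATCH_DIR = Path("incoming")    # assumed drop folder (the trigger)
    DONE_DIR = Path("processed")

    def apply_map(row):
        # Map one endpoint collection (a CSV row) to another (a JSON record).
        return {"item": row["item"], "qty": int(row["qty"])}

    def run_service():
        WATCH_DIR.mkdir(exist_ok=True)
        DONE_DIR.mkdir(exist_ok=True)
        while True:  # wait silently in the background for the trigger
            for f in WATCH_DIR.glob("*.csv"):
                with f.open(newline="") as fh:
                    records = [apply_map(r) for r in csv.DictReader(fh)]
                (DONE_DIR / (f.stem + ".json")).write_text(json.dumps(records))
                f.unlink()  # each file is handled as soon as it appears
            time.sleep(1)

    if __name__ == "__main__":
        run_service()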

An enterprise-class microservices integration suite such as The Link® also includes all of the goodies that you expect from any integration solution:

1. Centralized management (reporting, monitoring, error tracking, failed transaction retry, scheduling, etc.)
2. Support for centralized or distributed development and deployment of service maps.
3. Automated transaction logging, process logging, error logging, etc.
4. Tight integration with Visual Studio lets you work in a familiar IDE, with support for your existing classes, DLLs, APIs, etc. Leverage your existing knowledge and existing code.

Missing Link Technology developed The Link®, a powerful and flexible integration suite. The Link® allows add-on applications to communicate with an unlimited number of other systems such as ERP, time and attendance, shop floor systems, and more. Using open standards, The Link® supports reading and writing tables (any ODBC-compliant database), views, delimited files, XML files, web services, web APIs, .NET APIs, and more. Endlessly extensible through tight integration with Visual Studio, The Link® has the power and the flexibility to integrate with all your systems.

     

The end result is more sales! The value proposition for The Link® and your customers is a real-time integration solution that facilitates integration between many systems, including well-established ERP systems, and that any customer or partner can take advantage of. Customers will also see dramatically improved speed to solution with built-in error trapping, error reporting, transaction logging, monitoring, management, and reporting tools. The Link® allows you to provide world-class real-time integration solutions for your customers without the need for specific knowledge of each ERP or other enterprise system.

     

(The old way. Everything in one single, monolithic server. All eggs in one basket)

(The new way. Small, agile, individual Windows services, each running a single integration map)