ACORD LAH – eiConsole IDE

     

    eiConsole for ACORD LAH – TXLife Demo

    Welcome to the PilotFish eiConsole for ACORD LAH (Life Annuity Health) demo. Today we’ll be walking through using the eiConsole (shown here) to build an ACORD 103 New Business Submission interface.

    Route File Management

    When we first start the eiConsole, we’re shown this route file management screen. This will show us our currently selected working directory or project folder, as well as any interfaces or routes actually configured within there. Of course, we can always get started on a new interface by browsing our PilotFish Interface Exchange (PIE). The PIE is an interface repository where you can search for and download pre-configured interfaces and templates – a great way to get started building a new interface. We can browse for a template, download it, and then just tweak and modify it until it’s ready to go.

    Automated Interface Assembly Line

    We’ll continue the demo by walking through this pre-configured interface, which goes from App Entry (the source system) to New Business (the target system). We’ll open this up and get our main eiConsole screen shown here. Each of the 7 stages you see (represented by the columns of this table) is one of the steps in our assembly line approach to building interfaces.

    Each row on the left-hand side represents a source system, and each row on the right-hand side represents a target system (though here we only have a single row on each side). Source systems are the places we receive data “from” and target systems are the places we send data “to”.

    Defining the Source for the ACORD TXLife Interface

    We’ll start with our first stage and move left-to-right configuring and describing each stage as we go. Our first stage is a source system. This is just a place to provide a name and optionally an icon and metadata to best describe what it is we’re connecting to. This, along with the target system, is going to act as documentation so we can just glance at an interface and get a feel for what it’s doing. 

    Select the Interface Listener from Built-in Options or Build Your Own

    We’ll then proceed to our listener. The role of the listener is to decide how we will actually connect to the source system. You do that by choosing one of the listeners in this drop-down or dialog here. If you can think of a way to send or receive data, we probably have a listener built in – and if we don’t, any point where you see drop-downs or dialogs like this is a point of extension, so we can write our own modules in Java or .NET.

    Once you’ve selected a listener type, you’ll just need to configure it down here in this configuration tab and its various fields. For this RESTful web service, we’re just going to provide the service name and the list of resources that we actually support.

    Processing the ACORD Source Data

    Each source system can then have a number of processors associated with it. Processors are low-level operations on data – things like decryption, decompression, and character conversion – allowing us to modify or slightly work with the data before it actually proceeds to the further stages. 

    Each source system has a source transform associated with it. The job of the source transform is to take whatever that particular source system gave us and convert it to a canonical representation of the data we want to process in this interface. We do this in two steps. 

    The first step is a transformation module that takes any non-XML format and converts it to XML. This could be anything: ACORD AL3, CSV, delimited and fixed-width files like COBOL copybooks, or other formats such as JSON, Microsoft Excel, and name-value pairs – or we could just accept the XML as-is. Here we’ve chosen the JSON transformer to take the JSON we received from a web service and convert it to XML.
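To make the JSON-to-XML step concrete, here is a minimal sketch of that kind of conversion in Python. This is not the eiConsole’s actual transformer – the element naming, list handling, and the sample payload are our own simplifications for illustration.

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(data, tag="root"):
    """Recursively convert parsed JSON into an XML element tree.
    A minimal sketch: real transformers also handle attributes,
    namespaces, and type information."""
    elem = ET.Element(tag)
    if isinstance(data, dict):
        for key, value in data.items():
            elem.append(json_to_xml(value, key))
    elif isinstance(data, list):
        for item in data:
            elem.append(json_to_xml(item, "item"))
    else:
        elem.text = "" if data is None else str(data)
    return elem

# Hypothetical new business payload received from a web service
payload = json.loads('{"Applicant": {"FirstName": "Jane", "LastName": "Doe"}}')
xml_string = ET.tostring(json_to_xml(payload, "NewBusiness"), encoding="unicode")
print(xml_string)
```

Running this prints the payload re-expressed as nested XML elements under a `NewBusiness` root, which downstream stages can then map with XSLT.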

    The next step of our source transformation is a logical transformation built using our data mapper. We’ll come back to this because the transformation for our target is a little bit more interesting. 

    Routing the ACORD Data

    After we’ve completed our source transform stage, we’ll continue on to the routing stage here in the center. Now the job of the routing stage is two-fold. First, in the routing rules tab, we can configure which target system a given message is going to go to.

    Now we only have one target here, so we’ve chosen all targets. We could also set up arbitrarily complex expressions written against the contents or metadata of a message, so that it only goes to particular targets. We can also use this stage to configure transaction monitors. These are going to be our proactive error notifications and alerts if anything goes wrong or unusual in the processing of our interface.

    Transforming the Data for the Target

    When our message reaches one of the targets, we first go through a target transformation. Like the source transformation, this happens in two steps. A logical transformation over here using our data mapper and an optional transformation module on the right. We’ll open up our data mapper now and take a look at the mapping we have to go from a canonical format to an ACORD 103. 

    ACORD TXLife 3-Pane Data Mapping

    Here we have the data mapper. This is a 3-pane mapping tool we use to build all of our logical transformations. On the left-hand side, we’ve loaded in our source format – in this case, a canonical format representing a new business submission. On the right-hand side, we’ve loaded in our target format. Here we’ve read in the actual ACORD TXLife definition.

    As we browse through these different fields and elements, we’ll see the ACORD descriptions and definitions are inline in the tool as well as any values like ACORD type codes.  We can search through and browse those and that’ll allow us to work with ACORD without needing to refer to external documentation.  

    In the center, we have a tree representing our actual mapping logic. We will build our mapping by dragging and dropping from our source and target formats into this tree here in the center. We can augment those mappings with these structures up here. Here we can find rules for flow control, like loops and conditions, as well as the ability to interact with XPath and XSLT functions, callouts and anything else we need to augment that mapping as we build it out. 

    Working with the data mapper is basically just drag & drop. We can find particular elements in here. For example, we could take this transaction execution date and delete it. If we wanted to remap it, we can simply go over here on the right. You’ll notice that unlike the other fields here, it doesn’t have a checkmark, so we know what is and isn’t used just by looking. We’ll drag & drop this element onto its parent, and then from here – we have several ways of populating it.

    We can use this palette up here. For example, we can do a callout to grab the current date, or we could just grab it from our source system and drag & drop that onto the element. We’ll build our entire mapping just by doing drag & drop. There’s no scripting or coding when you’re in the tool unless you want to do it.

    Now underneath, the mapping produces W3C-compliant XSLT. XSLT is a web-standard transformation language that’s been around since 1999, and we have a fully-featured editor here. If we wanted to make changes, we could add our own elements and attributes within here. Any elements or attributes we added would show up in the mapping view and vice-versa.
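To give a feel for what that generated XSLT looks like, here is a small hand-written fragment of the same flavor. The canonical element names (`NewBusiness`, `Submission/Date`) are hypothetical; `TransExeDate` is the transaction execution date field discussed above.

```xml
<!-- Illustrative only: a fragment of the kind of stylesheet the mapping
     view generates; source element names here are hypothetical. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/NewBusiness">
    <TXLife>
      <TXLifeRequest>
        <!-- Map the canonical submission date onto the ACORD field -->
        <TransExeDate>
          <xsl:value-of select="Submission/Date"/>
        </TransExeDate>
      </TXLifeRequest>
    </TXLife>
  </xsl:template>
</xsl:stylesheet>
```

Because the XSLT is a plain web standard, edits made in this text view and edits made by drag & drop in the mapping view stay in sync.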

    Testing the Interface or Route

    Now, in addition to being able to build out your mapping in the data mapper, we also have fully-featured testing. You can give it a sample file, push a button and see the output of the transformation. We have a debugger if you’d like to do it step-by-step and see exactly what’s happening at each stage of your transformation. 

    In terms of those source and target formats, we have a variety of ways of reading those in. 

    Using our format readers, we can read in the ACORD Life (LAH), P&C and RLC formats directly – including not only the schema files but the metadata files with type code definitions and other documentation information. We can read in from sample CSV files, web services or even databases directly. For all of these different formats, the format readers are going to provide the same level of information that we see for ACORD here.

    Also, for ACORD, we have built-in recognition of those type codes. When we’re looking at fields like a transaction type we can select those from an enumerated list. We have a tabular mapping tool to allow us to automatically and easily map between codes from one system and another. There are also built-in collaboration tools. We can go to any given field, go to our notes tab here and add our own notes. 

    The notes allow us to collaborate. They allow multiple users to share documentation information while they are working in the same mapping, as well as the ability to add comments within here, in the center. 
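The tabular code-mapping idea mentioned above boils down to a lookup table between two code systems. This sketch uses gender codes as a stand-in; the table contents and the default value are illustrative, not taken from the actual ACORD type-code tables.

```python
# Hypothetical tabular code map: translate a source system's gender
# codes to the target system's type codes (values are illustrative).
GENDER_CODE_MAP = {
    "M": "1",
    "F": "2",
}

def map_code(table, source_code, default="0"):
    """Look up a source code in a mapping table, falling back to a
    default ("unknown") code when no row matches."""
    return table.get(source_code, default)

print(map_code(GENDER_CODE_MAP, "F"))  # -> "2"
print(map_code(GENDER_CODE_MAP, "X"))  # unmapped code falls back to "0"
```

In the eiConsole this table is built in the tabular mapping tool rather than in code, but the runtime behavior is the same row-by-row translation.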

    That’s the data mapper at a 10,000-foot view, and again, we’re going to use this for all of our logical transformations. While we’re using it here to map a canonical to an ACORD 103, we could also use it to map from ACORD to a database, or from some flat file to a completely different file format – really, whatever you want. It’s all going to be done in the same tool. Now that we’ve completed our mapping, we’ll close the data mapper.

    At this point, we’ve received our message from our listener and transformed it to our canonical message. For this target, we’ve transformed it from the canonical to the ACORD 103. Next, we’ll continue on to our transport step.

    Data Transport Configuration & Processing

    Now, the transport, like the listener, has processors. If we needed to encrypt or compress data before we sent it out, we can do that here. The transports, like the listeners, have a myriad of ways of sending data out. If you can think of a way to send or receive data, there’s probably a transport for it. Here we’re going to send it off to some HTTP endpoint. 
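Delivering the ACORD 103 to an HTTP endpoint can be sketched with Python’s standard library. The endpoint URL is a placeholder; a real transport would also handle authentication, retries, and the error notifications described earlier.

```python
import urllib.request

def build_transport_request(url, acord_xml):
    """Prepare (but do not yet send) an HTTP POST carrying the ACORD 103
    payload. Calling urllib.request.urlopen(req) would deliver it."""
    return urllib.request.Request(
        url,
        data=acord_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
        method="POST",
    )

# Hypothetical endpoint standing in for the New Business system
req = build_transport_request("http://example.com/newbusiness", "<TXLife/>")
print(req.method, req.get_header("Content-type"))
```

This separates building the request from sending it, which is also why a test run (like the one below, where the server is unreachable) can still show exactly what would have gone out on the wire.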

    After you’ve gone through left-to-right and followed our assembly line approach to building your interface, you can do testing from right within the eiConsole. Simply go up to the mode menu, select testing mode and pick where you want to start your test. Now from within testing mode, this first green arrow represents where our test is currently set up to start. The blue arrows are all the places a message could go, and these question marks just mean that for this test, nothing has happened yet.

    We can start our test at our listeners, we can also load in sample files and we can save and load testing sessions from before – so we can build a battery of unit tests that we can apply later. Here we’re going to start a test at the source transform, using this JSON sample message. We’ll go ahead and hit execute test, and we’ll see our message move left-to-right. As each stage successfully completes we’ll get green checkmarks, and for any failures, we’ll get a red X.

    We can now follow our data along through the different stages. We can see the XML representation of our JSON message after we’ve transformed it. We can see the results of our transformation to our canonical, which for our test case here is empty. Moving on to our target transform, we can see the resulting ACORD 103. And finally, we can see why our message failed at the transport – in this case, we don’t have access to this server.

    What we need to do is go back to editing mode, make any quick changes, and swap between editing and testing until everything is working exactly how we’d like. Then we’ll return to that very first screen, route file management, and decide if we want to deploy our interface.

    Deploying the Interface

    Now there are a number of ways to deploy our interface. But the easiest is to simply go in under server view, provide connectivity information to your eiPlatform, hit connect and then simply drag & drop our interface from the top to the bottom, down here. 

    We can also share our interface out on that same PilotFish Interface Exchange (PIE) we potentially started with, as well as using our eiDashboard or a number of other methods to deploy and ultimately maintain the interface in production. 

    Connect Anything to Anything in ACORD

    That’s it! A tool that enables you to connect really Any System to Any Other System regardless of protocol or format, but with special emphasis on features and support for the ACORD LAH (Life Annuity & Health) and ACORD P&C (Property & Casualty) standards.

    Try it for Yourself!

    We recommend downloading a Free 90-Day Trial of the eiConsole and taking a look at it yourself. More PilotFish videos demonstrating other software features are listed on this summary PilotFish Product Video page.  If you have any questions, don’t hesitate to ask.

    If you’re curious about the software features, free trial, or even a demo – we’re ready to answer any and all questions. Please call us at 860 632 9900 or click the button.
