[Title slide 1. Blue CAI company logo with tagline “We power the possible” appears in middle of screen. Company website www.cai.io appears at the bottom center of the screen] [Title slide 2. Multi-color background with text centered in the middle of the screen that reads: “Virtual Event: The 3 Cs of Intelligent Automation: The convergence of AI and cognitive decision-making”. The white CAI company logo appears underneath this text towards the bottom of the screen] [Two speakers appear on screen. Christina Kucek, CAI, is on the left and Christian Ventriglia, CAI, is on the right.] 00:00:09 - 00:00:54 Christina Kucek Hello, and welcome to our latest session in a new series of 30-minute CAI learning sessions, the 3 Cs of Intelligent Automation. Why do we call it the 3 Cs of automation? First, the series is brought to you by CAI. We're a global technology services company with a 40-year history of combining our dual strengths of talent and technology to deliver lasting results across the public and commercial sectors. The second 2 Cs represent your hosts for the series. I'm Christina Kucek, Executive Director of Intelligent Automation at CAI. This is my colleague, Christian Ventriglia, UiPath Architect at CAI and UiPath MVP. 00:00:55 - 00:01:34 Christian Ventriglia The purpose of the learning series is to get under the covers on everything intelligent automation, practical use cases for automation, and technology advancements that drive efficiency and increase productivity. Sit back, grab a beverage, and learn how to hypercharge your automation team with tips and tricks from our expert guests. If you have questions, we encourage you to ask them in the chat. We'll try to get to all your questions as they come up during the session. If at any time you want to learn more, visit our website at cai.io for articles, client success stories, or to set up a discussion with someone on our team. 
00:01:35 - 00:02:05 Christina Welcome everyone to today's automation learning session event, The convergence of AI and cognitive decision making. My name is Christina Kucek. Briefly about me, I'm passionate about assisting clients in their automation journey, from building automation teams for RPA and document extraction to machine learning and artificial intelligence. Our solutions drive efficiency, cost savings, and a competitive advantage. With me is my co-host Christian Ventriglia. 00:02:06 - 00:02:39 Christian My RPA journey began nearly 6 years ago with a request from my manager to research an emerging technology called Robotic Process Automation. Fast-forward to today, and I'm a UiPath certified RPA solution architect with a passion for helping clients drive innovation using intelligent automation. I have a demonstrated history of delivering solutions across various business units, and I'm currently passionate about integrating machine learning and AI capabilities into the digital robotic workforce. 00:02:40 - 00:02:56 Christina All right, let's get started. In today's 30-minute discussion, we're going to talk about where AI and cognitive decision making converge with Ben Taylor and Christopher Zumberge. Let's go ahead and introduce them. [Another speaker appears on screen. Ben Taylor, Rainbird, is below Christina and Christian in the center of the screen.] 00:02:57 - 00:03:26 Christian We'll start with Ben. He's the Chief Technology Officer at Rainbird. He's the driving force behind the fusion of human expertise and automated decision making. A former Adobe computer scientist and an active member of the All-Party Parliamentary Group on Artificial Intelligence, Ben is passionate about solving complex challenges with innovative technologies. Before Rainbird, he led an award-winning team that revolutionized the motor insurance industry. Welcome, Ben. 00:03:27 Christina Welcome. 
[The fourth and final speaker, Christopher Zumberge, CAI, appears on screen and is left of Ben.] 00:03:28 - 00:03:53 Christian Next we have Chris. Chris is the Executive Director of Technology Services at CAI, and he has been designing and delivering AI solutions for CAI over the past 5 years. In addition to helping drive and execute on CAI's overall AI strategy, one of his areas of focus is the ethical use of publicly available machine learning solutions. Thanks so much for joining us today, Chris. 00:03:54 - 00:04:17 Christina Thanks Chris, and thank you all for being here with us today. We only have 30 minutes, so let's jump right in. Ben, we've heard of cognitive programming, which is basically an attempt to have computers mimic the way human brains work. I've heard Rainbird described as a cognitive decision making tool. Would you say this is accurate? 00:04:18 - 00:04:58 Ben Taylor Hey, first of all, thanks ever so much for having me. It's a pleasure to be here today to chat some of this stuff through with you. I think that's a broadly accurate description. I think the way we label these technologies is quite important. To understand Rainbird, Rainbird is an inference engine, which is really about synthesizing the way we as humans go about making high order decisions. By that, I mean the kind of logical decision making that we all take every single day, whether that's things like deciding what we're going to have for lunch or something in your domain of expertise, maybe you are a credit underwriter making lending decisions or a clinician making diagnostic decisions. 00:04:59 - 00:05:41 Ben Now, when you think about all of those different kinds of decision making, what they really require is reasoning, which we tend to consider as being a kind of human process. 
The way Rainbird works is about taking a body of expertise, which is encoded through a process of knowledge engineering or extracted through large language models, and then applying the facts of our situation. That's data. That might be things like, what do I have in the fridge, or it might be what symptoms is the patient presenting, and so on. What we do is we take that body of expertise that we encode into a model, we take that data, and we apply reasoning to reach some kind of a judgment. 00:05:42 - 00:06:11 Ben It turns out that this kind of human process of reasoning can be reproduced mathematically. This isn't necessarily about trying to reproduce the biological processes that go on in our brains or building analogs for the kind of neural processes that you sometimes see in other branches of AI. Rainbird is a reasoning engine that replicates the way humans go about reasoning to produce some kind of a judgment that you can take action on. In that sense, you can call this cognitive decision making. 00:06:12 - 00:06:26 Christina Awesome, thanks. Ben, with all the hype about LLMs, ChatGPT being the most famous, could you explain the difference between AI and the inference engine cognitive decision making that you were just describing? 00:06:27 - 00:07:05 Ben Oh yeah. There's a lot of hype at the moment, isn't there? It's a really interesting question, and a lot of this comes back to the way that we define things. AI is a field which is so prone to these kinds of hype cycles of inflated expectations. I've been in this field for something like 25 years, and I think this is probably the third time I've been sitting with this kind of hype and excitement around what we're doing. I think part of that comes from the fact that artificial intelligence is really a very broad church. It's a term that is poorly defined and actually describes a whole world of different kinds of technologies, and on its own is not all that useful. 
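Ben's description of an inference engine, a body of encoded expertise plus the facts of a situation producing a judgment, can be sketched as a tiny forward-chaining rule evaluator. This is a minimal illustrative toy, not Rainbird's actual engine; the rule and fact names are invented for the credit-underwriting example he mentions:

```python
# Toy forward-chaining inference: encoded expertise (rules) plus facts -> judgment.
# Illustrative only -- not Rainbird's engine; rule and fact names are invented.

rules = [
    # (premises, conclusion): if every premise holds, the conclusion is derived
    ({"income_verified", "low_debt_ratio"}, "credit_low_risk"),
    ({"credit_low_risk", "stable_employment"}, "approve_loan"),
]

def infer(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose premises are satisfied until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = infer({"income_verified", "low_debt_ratio", "stable_employment"})
print("approve_loan" in result)  # True, and every step traces back to a named rule
```

Because every derived conclusion points back to an explicit rule, the "why" behind the judgment is inspectable, which is the property Ben contrasts with purely statistical prediction.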
00:07:06 - 00:07:59 Ben A good way to look at this is through something you may have heard the term for, the AI effect or the AI paradox. It was really nicely summed up by John McCarthy. John McCarthy was one of the kind of founding fathers, I guess, of this whole domain. I'm going to butcher a quote from him. He said something along the lines of, "As soon as it works, we don't call it AI anymore." You know what? It's so true. When I first studied AI back in the 1990s, we would've called things like route planning that you use in your GPS to get you from A to B, we would have called that AI. That was a whole field of study. I don't think we call that AI now. AI is this kind of big broad set of different technologies, of which things like inference engines and cognitive decision making are all AI technologies. They sit within this big umbrella term that encompasses machine learning and large language models and symbolic AI and all these other things. They kind of go hand in hand. 00:08:00 - 00:08:02 Christina Awesome, thanks. 00:08:03 - 00:08:15 Christian Ben, real quick, while we're talking about AI and LLMs, could you maybe nerd out a little bit with me and talk about how this relates to symbolic AI versus statistical AI? I know there's a difference between the two. 00:08:16 - 00:09:19 Ben Oh yeah, yeah. I'm always happy to nerd out on this stuff. I could talk for hours. We only have 30 minutes, so I'll try and be quick. Look, symbolic AI is a form of AI which is really about taking structured models of the world that we can reason over. These are descriptions of what your experts know about the world. They include technologies like rules engines and knowledge graphs and those kinds of technologies. They require expertise. Those systems have to be explicitly programmed, but they don't necessarily rely on data to be able to go build those systems. 
When we look at these hype cycles that you've seen in AI, symbolic AI was very much part of what was called the last AI summer back in the 1990s. Alongside symbolic AI, you have statistical AI, which very often is about machine learning, about using technologies that start from data to produce some kind of an algorithm that can make a prediction or find patterns in those data, but without being explicitly programmed to do so. 00:09:20 - 00:10:07 Ben Of course, just like AI is a big broad umbrella term, machine learning is also a big broad umbrella term that encompasses lots of different technologies. The real thing, I guess, to understand about machine learning is that it's a system that makes predictions. It's looking at statistical likelihood, really. It needs data, but those predictions don't necessarily come with transparency. It can be very difficult to understand the why behind why a machine learning prediction has been made. We would argue that prediction is not the same as judgment. Judgment requires a degree of human-like reasoning. Really, that's what symbolic AI gets you. It's only really by bringing these 2 fields together that you get to leverage the kind of superpower that sits within. 00:10:08 - 00:10:12 Christian Ben, you're using LLMs and ChatGPT to power Rainbird, how does that work? 00:10:13 - 00:11:00 Ben Yeah, I described these 2 different worlds of symbolic AI and statistical AI. You can think of Rainbird as being a neuro-symbolic AI platform. We build structured representations of the world, which may start from a human expert sitting down and building explicit encodings of what they know about the world, along with all of the nuance and ambiguity that comes with the way that we as humans represent our domain and talk about what we know about the world. We're currently also using large language models to extract those knowledge graphs from unstructured data. 
In effect, that means you're able to build Rainbird-powered, fully glass box, explainable, transparent, symbolic solutions from documentation and processes that you already hold. That gives you a model which can then be trusted and refined by your human experts. 00:11:01 - 00:11:30 Ben You can think of it as a bit like an intermediary model between a large language model and your ability to go and deploy that safely, which is not subject to the kind of hallucination, which is another word for lying, that a lot of these large language models kind of suffer from. You have models that you own that are open to your own introspection, that are transparent, that you can safely connect your data to without the concerns that you might have about data privacy, about what's happening with those data, and then deploy in a safe and explainable way. 00:11:31 - 00:11:45 Christian That must affect the speed too then. We're able to use those large language models to kind of get us a more meaningful model that we can speak back to via Rainbird. Using those LLMs to generate a model which we then can later use in Rainbird, is that a fair assessment? 00:11:46 - 00:12:06 Ben That's a very fair assessment. Yes, that's exactly what this is doing. That thing that you can then go and use later that you've built in Rainbird, well, that's yours. It can run in Rainbird extremely efficiently, extremely fast, with none of the security or privacy or lack of explainability concerns that you would have if you were just using GPT or something similar on its own. 00:12:07 - 00:12:08 Christian Sounds like the best of both worlds. 00:12:08 - 00:12:26 Christina Okay, so wait, that's awesome. I want to get Chris into the conversation because he's actually had hands-on experience building solutions using Rainbird. Do I have to be a Python super user in order to even get started with this tool is my first question? 00:12:27 - 00:13:18 Christopher Yeah, no, not at all. 
Even Ben just said it, right, which is take the expert, sit them down, and have them build the model. That is not hyperbole at all. We keep talking about Rainbird being a tool to capture knowledge. The way they do it is something called a knowledge map. Those are just graphs that measure relationships in a process. Creating those graphs, all it requires is the domain expert, the subject matter expert, the people who can explain the decisions that are being made in the first place. They have this no-code interface, essentially, that was designed to actually reduce the friction between the people who hold that expertise and the process that's being encoded in the map. It's kind of amazing. You can sit down with them and let them graph their thought process in real time, create the solution right there, and then even run it, and it'll probably get you at least about 80% of the way. 00:13:19 - 00:13:53 Christopher There is also a very powerful XML-based backend. That allows you to encode the nuances of complex relationships, such as mutually exclusive relationships or deductive logic like syllogisms. If A and B are true, then C must be true. That's just the kind of knowledge that you would have from being embedded in a process for years and kind of understanding those relationships. The vast majority of people are never even going to need to touch that. They can build and deploy just right from the no-code UI. 00:13:54 - 00:14:04 Christina I love that. I mean, just being able to communicate effectively and have the stakeholders understand is 90% of the battle, honestly, in most of these complex solutions. 00:14:05 - 00:14:28 Christopher That's not something that we could really do with the statistical models. 
I mean, even that first part of building a statistical model is what they call feature engineering, and that's data scientists sitting down and combing through hundreds of thousands, if not millions, of rows of data, trying to figure out how to bring individual pieces of data in, which is a very different experience than having your decision making expert define the process. 00:14:29 - 00:15:07 Ben Yeah. We sometimes talk about explainability of these things in 2 different ways. We talk about explainability at the micro level. When you make a decision, why did Rainbird give you that decision, and you've got complete transparency over that decision. You can also think about macro explainability, which is what you get from the fact that a subject matter expert has been part of this process and that these tools are no code. You can look at this model, you don't need training in Rainbird, and understand the basis on which it's going to be making its decision. You kind of get macro explainability at the same time as that kind of micro, individual decision level explainability, which you won't get with a machine learning model on its own. 00:15:08 - 00:15:27 Christina It's really amazing. Gone are the days where any solution I want to build means the business has to call IT in order to maintain even nuanced changes. I'm done with that conversation altogether. I want them to have the power in their own hands to understand the tool, understand why they're making the decisions, and even take action on it without having to call IT, if I'm being honest. 00:15:28 - 00:16:02 Christopher I mean, even more than that, as someone who delivers machine learning solutions, it's very stressful. You could be delivering a solution that makes a difference in someone's life. 
There's a lot of weight on being able to bring those into production, but there's a lot of value to being able to say, "Hey, this model was built by the people you already trusted to make those decisions. We worked with them, we validated it. We are really just replicating that knowledge at a higher scale." That really takes a lot of the weight off of being able to put these into production and trust that they're making these good decisions. 00:16:03 - 00:16:14 Christina I can imagine UAT would be a total game changer too. I mean, when they have that level of understanding, understanding how to test it and feel confident about it should be a lot easier as well. 00:16:15 - 00:17:01 Christopher Yeah, exactly. That goes back to what Ben was saying about transparency, being able to see how the decision is made in the first place. It's so difficult with these statistical models, especially when you're going through tons and tons and tons of test data trying to say, "All right, well these decisions were wrong. Let me try to see if I can untangle where that went wrong." With UAT for symbolic models, your subject matter expert can look at the output and then trace back to where the original decision was made. You don't want things to go wrong, but it happens. You can kind of say, "Oh, you know what? I made a false assumption here," or, "I mislabeled a relationship here." Make that tweak, run it again, ensure that the right answers are coming out. To your point, you're not having to go back to IT to just throw tons of new data at the model again to see if it can retrain appropriately. You have that expert making the tweaks, making the investigation. 00:17:02 - 00:17:03 Ben Yeah. 00:17:04 - 00:17:18 Christina Yeah. When you get into production with real production data, that's when the weirdness shakes out, and that's when you catch that you've missed a decision. Instead of going back to the drawing board, this is awesome, that's really clicking with me. Thanks for sharing that. 
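The deductive pattern Chris described earlier, if A and B are true then C must be true, is the kind of relationship a knowledge map encodes, and it is also what makes tracing a wrong answer back to its source possible. A minimal backward-chaining sketch, with hypothetical node names and no resemblance to Rainbird's actual XML backend, might look like:

```python
# Minimal sketch of deductive logic over a knowledge-map-style graph.
# Node names are hypothetical; this is not Rainbird's XML backend.

knowledge_map = {
    # conclusion: list of premise sets; any fully-satisfied set proves it
    "C": [{"A", "B"}],
    "eligible": [{"C", "D"}],
}

def proves(goal: str, facts: set[str]) -> bool:
    """Backward-chain: a goal holds if it is a known fact,
    or if every premise in some premise set can itself be proven."""
    if goal in facts:
        return True
    return any(all(proves(p, facts) for p in premises)
               for premises in knowledge_map.get(goal, []))

print(proves("C", {"A", "B"}))         # True: A and B together entail C
print(proves("eligible", {"A", "B"}))  # False: D is not established
```

Because each conclusion names its premises explicitly, a subject matter expert reviewing a wrong output can walk the chain backwards, exactly the UAT workflow Chris describes, and fix the one mislabeled relationship rather than retraining on new data.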
00:17:19 - 00:17:29 Christian I feel like that kind of answered my next question, actually. For Chris, what would you say you're most impressed with in the Rainbird tool? I kind of have a feeling I know the answer. 00:17:30 - 00:18:29 Christopher Yeah, no, I mean, we keep talking about the explainable AI, and that really comes in a couple different flavors. We talk about transparency. A model is transparent if I can both explain and justify how the parameters were determined from training data, how decisions are produced from the testing data, and how the model was created in the first place. The second one is we talk about interpretability, which was another path that we went down here, where a model is considered interpretable if its decision making process and underlying biases are something that I can explain. Explainability is kind of the collection of all these features. For the most part, in a statistical AI, I don't have those things. I don't have a full understanding of how the parameters are determined. I mean, we look at transformer models. That was the big thing when Google brought out BERT, which is a big transformer model, and now GPT is based on these things. These are millions and billions of parameters. I can't explain every single one of those. 00:18:30 - 00:19:18 Christopher It really just, I go back to that when you're deploying AI that makes differences in people's lives, you have to be so careful about how you're creating it and making sure that you take the responsibility on when you deliver those. You need to provide a basis for justifying decisions, tracking them, verifying them, and especially exploring new facts. As the world changes, as the data changes, ensuring that you're able to keep up with those decisions and continue making the correct decisions, because the first set of new data that comes in that you classify wrong, that could affect someone. 
That's why I talk about the value prop of Rainbird and being able to bridge what they call the POC-to-production gap, the ability to bring it live and say, "Hey, I have faith in this. Let's do it." 00:19:17 - 00:20:06 Ben Yeah, how many machine learning models are sitting in a Jupyter Notebook on some data scientist's laptop somewhere, which can never be deployed even though you might get super high accuracy, but can never be deployed for exactly those kinds of reasons. When we look at explainability, one important thing which I think is often missed, particularly when trying to deploy just LLM models, is an understanding of what explainability means for the end user and what's important for the end user in explaining a model. Sometimes you might be able to bring out some statistical analysis of the way an LLM model's behaving, but that's not sufficient, particularly, exactly as you described, Chris, in those critical environments, to be able to give the kind of explainability that allows you to get trust in those systems and bring them to production, which is why you end up with this gap between proof of concept and production. 00:20:01 - 00:20:22 Christina Awesome. I can see where we could use Rainbird very easily in a lot of UiPath solutions. Christian, can I just throw this one at you? How could you imagine using Rainbird in some of the automations that we're building? 00:20:23 - 00:20:52 Christian Yeah, excuse me. Absolutely. Really what Rainbird gives us is intelligent decision making. When we're using robots, and I always tell our customers this too, everyone always thinks Terminator and that the robot's going to go rogue, and that's never the case. The robots, they're dumb, is what it comes down to. There's no real element of intelligence there. We have to tell the robot all these different conditions and how to handle them. A decision tree might grow and grow and grow. 
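The round-trip Christian describes next, the automation sends a question plus the facts it has gathered and receives back a decision, the reasoning behind it, and a confidence level, might be shaped roughly like the sketch below. The payload fields and the mock service logic are entirely hypothetical; this is not the real Rainbird or UiPath connector API, just an illustration of the contract he outlines:

```python
import json

# Hypothetical shape of a decisioning round-trip: the automation sends the
# facts it gathered; the engine returns a decision, the reasoning behind it,
# and a confidence level. Endpoint, field names, and logic are invented.

request_payload = {
    "query": "Should this credit card application be approved?",
    "facts": {"annual_income": 85000, "existing_debt": 12000, "fraud_flags": 0},
}

def mock_decision_service(payload: dict) -> dict:
    """Stand-in for the remote decision engine so the sketch is runnable."""
    low_risk = payload["facts"]["fraud_flags"] == 0
    return {
        "decision": "approve" if low_risk else "refer_to_human",
        "rationale": ["no fraud flags raised", "debt-to-income ratio acceptable"],
        "confidence": 0.92 if low_risk else 0.55,
    }

response = mock_decision_service(request_payload)
print(json.dumps(response, indent=2))
assert "rationale" in response  # the "why" travels with the decision
```

The key design point is that the rationale and confidence come back alongside the decision, so the robot (or a human reviewer) can act on the judgment without losing the audit trail.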
00:20:53 - 00:21:53 Christian When we're dealing with complex decision making, I always go back to things like maybe fraud detection or getting approved for a credit card. The SMEs, like Chris was talking about earlier, those folks know the process intimately. If we can sit down and build out that decision map in Rainbird, then with the connector that's available in the UiPath marketplace, once we have that model deployed, it's as easy as dragging in that connector. We say, "Hey, this is the question that we're asking it," and then the Rainbird connector's going to return the decision that it made, and, I believe, why it made that decision and a level of confidence. That gives us a whole other level of being able to implement intelligent decision making in what was previously just a dumb robot making the decisions the way the programmer told it to. It really adds a new layer of intelligence. I think there's a lot of value to that. 00:21:54 - 00:22:21 Christina Yeah, I mean, we've been putting humans in the loop when we need a major decision made. I can imagine, because of the explainability and the transparency, we could put Rainbird in instead of the human. It just speeds up that process. Now, Chris just wrote an article on hyper automation that I loved. If you haven't checked it out, we'll put a link in the chat. I'm wondering if you had Rainbird in mind when you wrote that article. 00:22:22 - 00:22:50 Christopher Yeah, no, absolutely. I mean, Christian just said it, right? The point of hyper automation is how do we elevate from RPA, which is just moving things around, to essentially putting the brain in place that can say, "Move this. Now move this. Now move this," but then a high priority item comes in and it's like, "All right, pause everything else. 
This thing needs to be moved through," and free up resources to make sure the high priority item gets processed correctly while then moving all the other things around so the process never stops, right? 00:22:51 - 00:22:52 Christina Yeah. 00:22:53 - 00:23:09 Christopher Rainbird is a perfect technology for this because what we're looking for is complex decision making. It's not only that, it's complex decision making that we can review and fix if necessary, and trace those decisions that are being made even outside of standard working hours. 00:23:10 - 00:24:00 Christopher One of the big use cases is employee onboarding. There are a lot of pieces. You've got integrations with several third party systems, data coming from all different places, including human-submitted data, and it's completely asynchronous, but at the same time, it's also strictly ordered. Things have to happen in a certain set of steps. On top of that, it's a high visibility process. For a new employee, it could be some of their first interactions with you as a company, and you want that to go smoothly. In this case, Rainbird is perfect because if something does go wrong, you need to be able to take a decision, trace it back to where something brought you to that wrong decision, and then tweak it so you don't have that experience in the future. I think that's especially valuable for larger companies who might be onboarding dozens of employees a week, if not a day. 00:24:01 - 00:24:44 Christopher We talked about how one of the hardest things about maintaining these statistical models is processing large amounts of data; it's hard to filter down and narrow in on the unexpected decisions being made. It's a lot of data to go through. It's a lot to validate, especially if you don't have anything telling you which input parameters brought you from here to there. 
Essentially what you're going to end up doing is just throwing tons and tons of test data at the model, again, retraining it and hoping that you get a statistically better result. There's a lot of money being thrown at that, the MLOps side of the house. Yeah, it's difficult. It's very difficult. 00:24:45 - 00:25:27 Christina You just got my wheels turning because I feel like I get smarter every time I talk to you guys. I'm just thinking of all these use cases like medical claims, medical, all kinds of benefits. This thing that the provider does, "Does this fit within my benefits policy?" Banking, "Do I qualify for this loan?" All these things, and they have to be auditable, explainable, like ironclad. When we're talking about a statistical model, like 90, 87%, are we feeling comfortable with that? "I didn't get my loan because the model said 87% I was a high risk." All these things, there's just so much potential here. It's very exciting. 00:25:28 - 00:25:52 Ben Yeah. Yeah. Imagine healthcare administration approval of prior authorization of medical claims that needs a clinician to go and make that final judgment. If you want to go and automate that, you need to be able to explain to the clinician why you are approving or denying that claim so they can look at that and go, "Yes, that's right." You can't do that if you are just taking a purely statistical approach. 00:25:53 - 00:25:59 Christina Right. When you've got a sick kid, you don't want to wait for that approval. You want it to be as fast and efficient as humanly possible. 00:26:00 Ben That's it. 00:26:01 - 00:26:03 Christina That's fantastic. 00:26:04 - 00:26:13 Christian Just to bring it back to UiPath a little bit, because I'm a little bit of a UiPath fanboy, but Ben, can you tell us a little bit about the vision for when Rainbird started its partnership with UiPath? 00:26:14 - 00:26:44 Ben Yeah, sure. I mean, you guys have hit it really. 
We saw so many of our customers using RPA tools. I'm going to pick on UiPath, but this is just as true for other vendors out there, for Blue Prism and others. People were trying to build decisioning systems by tying together a set of branching decision logic, lots of if-this-then-that kind of processes. If they want to be able to get that traceability through the process, that's the only option that they have. 00:26:45 - 00:27:21 Ben First of all, that puts distance between the people who understand that decision and that process of encoding, because they're not going to be sitting in UiPath Studio building it up themselves. Also, if something changes, you end up with this big spaghetti nightmare where you have to manually propagate that change through the thing. It becomes a hugely frictional and expensive process, and it doesn't deal with all of the nuance and ambiguity that there really is about how people really go about solving problems and making decisions that are ultimately judgments in the real world. We saw a real opportunity for this separation of concerns between process and decision. 00:27:22 - 00:27:58 Ben UiPath, and Chris, you hit it on the head, it's about the brain. UiPath is going off and getting the data that's required and automating the more linear parts of the process, and then passing those data out to something like Rainbird that can go and make a decision that can come back to UiPath with the judgments that it made. It's so complementary. It just absolutely made sense that we would work together. That's why, as you quite rightly say, we're now live in the UiPath marketplace. You can go build your Rainbird model and then just drop in a connection in the middle of your process, and then bang, there you go. 00:27:59 - 00:28:26 Christian That's so awesome. It's really exciting for UiPath and other UiPath developers as well who want that AI technology available and, like you said, that drag and drop ability. 
Once you have that Rainbird model built, it's really, really great. Here's my frustration, Ben, excuse me. I hear about a cool new piece of technology and I think it might be a great fit for my organization, but really, how do I even get started with Rainbird? What's the first step? What do the costs look like? 00:28:27 - 00:28:47 Ben Well, I mean, as with most projects, the first step is always going to be to find a use case. How are you going to use this technology? Where are you going to start with this? We have workshops, we have thinking tools that we can deploy. We have lots of experience in helping people uncover where the value for these kinds of technologies is and how you might surface, kind of, the place to start. 00:28:48 - 00:29:21 Ben Really, the next step is just to build. Rainbird is such an easy tool to go and get started with. We have a training academy, it's all online, to help people get up to speed with it, but it's a very natural environment. People already think in the way that Rainbird is designed. It's very easy to pick Rainbird up and to start building with it. We encourage people to iterate and go and start building with Rainbird just as quickly as you can. You can use our tools, you can use our large language model powered tools and our blueprints and things to help you get started, but most people pick up Rainbird and within very little time they're starting to build and plan these models. 00:29:22 - 00:29:40 Christina Yeah. Our intelligent automation team would love to get hands-on and do more work like Chris's team has been doing with Rainbird. I'm really excited to introduce this to as many clients as it makes sense, to take the human out of the loop and add some artificial intelligence solutions. 00:29:41 - 00:30:00 Christian We've had a lot of success with that kind of self-driven training, so it's really exciting to learn that Rainbird offers that as well. Well, unfortunately, I believe our time is over for today. 
It's been an absolute pleasure chatting with all of you, especially Ben and Chris. Thanks so much for your time. I'd also like to thank our audience for your attention and participation. 00:30:01 - 00:30:31 Christina We hope you'll take away these 3 key lessons from today. Number one, you don't have to be a Python superstar to get started using AI and hyper automation, thanks to tools like Rainbird. Two, Rainbird is an inference engine that provides the context of human intelligence and wisdom to AI solutions. Three, thanks to the partnership of Rainbird and CAI, we can hyper automate processes in new and complex ways. 00:30:32 - 00:30:59 Christian Excellent summary, Christina. For everyone that attended, we will be sending a recording of this event later to share with your colleagues or peers. In the meantime, if we didn't get to your question and you're interested in learning a bit more about CAI Intelligent Automation Solutions, or know someone that is, please visit our website at cai.io and fill out our contact form. Or you can even contact one of our team members via LinkedIn. Thank you and have a great rest of your day. 00:31:00 - 00:31:15 Christina Yes, thanks everyone for joining. Stay tuned for more details about our next event in the learning series coming soon. [Closing slide 1. Blue CAI company logo with tagline “We power the possible” appears in middle of screen. Company website www.cai.io appears at the bottom center of the screen] 
