Conversations On: Trustworthy AI
Hello. My name is Martin Walsh. I'm Daon's chief legal counsel, and today I'm speaking with Louise McCormack, who is Daon's AI expert. Lovely to speak with you all, and I'm looking forward to having a great discussion with Louise. Louise, before we get into the topic of AI, would you mind if I asked you to introduce yourself and tell us a little bit about your efforts in AI, please? Sure. So, I've worked across a number of different sectors over the past ten years, primarily in digital transformation and growth optimization, which involved implementing AI chatbots using NLP in regulated sectors as far back as 2018. I'm currently undertaking a PhD in trustworthy AI evaluation at the University of Galway and the ADAPT Centre, here in Dublin. Okay. That's great, Louise. Thank you. So AI has been making lots and lots of headlines recently, but it's been around in our industry for years. The new European AI Act is a significant change. How do you see it shaping businesses going forward? The AI Act focuses on ensuring that organizations are mitigating risks to things like health, safety, and fundamental rights, and it arrives at a time when it will do two different things. Firstly, it's going to regulate the existing technology that, as you say, has been around for years – machine learning algorithms that are widely used in financial services for things such as risk evaluation, or the traditional machine learning models that are used by companies like Daon for things like fraud detection and verification. These algorithms are typically already applied in regulated or risk-averse industries, so they already have many safety controls in place, and the AI Act will make those organizations more accountable. They will have to produce more documentation for their models. The two biggest things that I see happening for these traditional machine learning industries are, firstly, they'll have to do a better job at version control of algorithms, because the AI Act requires documentation for each model and proof of testing of the performance metrics, for example. So insurance companies who currently change their models on a daily or weekly basis, in a way that's almost dynamic, will have to reconsider their approach. And secondly, there'll be an increased focus on bias and discrimination in the models. Currently, it's often acceptable just to show that a sensitive attribute like a person's race or gender wasn't given directly to an algorithm, but because we know machine learning models can infer those protected characteristics, the AI Act requires a more active approach to showing that the models aren't biased, which is a pretty significant change. For the newer AI – by this I mean anything powered by a large language model, an LLM like ChatGPT; these are the ones we're seeing in the headlines – there will be a much more significant change under the AI Act. Okay. So to be clear, for anyone listening, as you know, Martin, we do not provide AI systems that use this type of technology at Daon. Yes. Important point. And the reason for this is simple: it's not needed. Traditional machine learning is more suitable. It's more consistent, and most importantly, it's reproducible, which you need in this industry.
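As a concrete aside for readers: below is a minimal sketch, in Python, of the kind of versioned, documented model release Louise describes above, where each trained model carries its test metrics and a deployment can be rolled back. The class names, fields, and the accuracy gate are hypothetical illustrations, not a Daon implementation and not anything prescribed by the AI Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelVersion:
    """One documented, reproducible release of a trained model.

    Hypothetical structure: the AI Act asks for documentation and
    evidence of testing per model, not this exact schema.
    """
    version: str            # e.g. "2025.03.1"
    trained_on: date        # training data cut-off
    metrics: dict           # measured on a held-out test set
    training_data_hash: str # ties the release to an exact dataset

class ModelRegistry:
    """Keeps every release so a deployment can be audited or rolled back."""
    def __init__(self):
        self._versions: list[ModelVersion] = []

    def release(self, candidate: ModelVersion, min_accuracy: float = 0.95) -> bool:
        # A release is only accepted if its documented metrics pass the gate.
        if candidate.metrics.get("accuracy", 0.0) < min_accuracy:
            return False
        self._versions.append(candidate)
        return True

    def rollback(self) -> ModelVersion | None:
        # Drop the latest release and return the previous known-good one.
        if len(self._versions) > 1:
            self._versions.pop()
        return self._versions[-1] if self._versions else None
```

The point is less the exact schema than the behavior Louise returns to later in the conversation: releases that are gated, documented, and reversible, rather than models that mutate continuously.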
But there is an explosion of new AI products where companies are often building things at the application layer – they're building software that at the back end is powered by things like Gemini or, you know, GPT. Because those organizations have modified the AI systems they're built on, even though they didn't develop Gemini, for example – because they've modified it – they're basically the AI provider for a new type of technology, and they're going to have to produce significant documentation and work with the providers of the large language models to show that the models are safe and that they're not biased, and that's going to be quite difficult. Currently, there's a risk-reward ratio to deploying these types of LLMs, and key decision makers seem to have favored, I guess, the reward for the past few years. They've taken kind of chances in deploying some of these technologies, and that is starting to change. With the AI Act coming in, people are changing the weighting of that, and I think we can expect to see a trend of prioritizing risk mitigation over the potential rewards associated with that newer, higher-risk technology. And, you know, I think we can expect to see that more and more as we get closer to August next year, when the bulk of the AI Act will come into force. Obviously, a huge amount of thought and work and discussion has gone into the drafting and the creation of this European bill. Do you see it as being capable of having an impact outside Europe? Do you think it could set a precedent around the globe, perhaps? It's, I guess, a difficult question to answer right now, because there are a lot of changes happening in the world. But when we look back at GDPR, that did significantly impact how people process data globally. The EU is typically seen as being at the forefront of AI regulation, and so I think people in this space should be looking at what's happening in Europe. And there are a lot of comparisons to be drawn between the AI Act and the GDPR. Similar to how the California Consumer Privacy Act took inspiration from GDPR, we can expect adaptations of the AI Act to come into force globally, whether that's in a couple of years or whether that's in, you know, six or eight years. It won't be the exact same – the strictness of GDPR wasn't necessarily emulated globally, but the concepts were. The implementation of that legislation fundamentally changed how we all think about and use data in organizations around the world. And the goal of organizations when they went to be compliant with the GDPR was just to be compliant. But what actually happened as a result of the legislation is we've seen a re-education of every single person who works with data. Everybody has changed how they think about data. We consciously or unconsciously have started to incorporate concepts like privacy by design – a proactive approach of embedding data protection principles and safeguards into systems from the outset. And this is happening; it's in the fabric of organizations right now to think of privacy by design. What the AI Act is likely to do is cause a similar change in how we think about AI: not just privacy by design, not just security by design, but other ethical considerations like bias, human oversight, and transparency. It's going to bring about this mindset shift of ethics by design, and I think it will change how we design and deploy AI systems and kind of force us to introduce ethical principles throughout the life cycle, so we're developing trustworthy AI systems that meet the standards of the legislation. And I suppose even in Europe, it's a very new act.
We're just figuring it out. Right? It's just at the very start. It seems to me you can expect it to have a profound effect in Europe. And I suppose what you're saying is it's likely to be influential throughout the rest of the world, but we have to see how things go as it's reviewed and enforced and so on, and as we learn more about it. Now, you mentioned and used the term trustworthy AI. It's a nice phrase. Sounds like a buzzword. But in practical terms, what does it mean for us or for other businesses, and why do you think it should be prioritized? Yeah. The phrase "trustworthy AI" is included in the AI Act, and in industry people have been using terms like "responsible AI" or "ethical AI", although we've seen a shift to people now adopting the phrase "trustworthy AI" within the industry as well. It's generally used to refer to the seven principles published by the EU High-Level Expert Group on AI in 2019. These are ethical principles that stipulate what trustworthy or ethical AI actually looks like. It's a high-level framework. The principles, some of which we've mentioned, are things like transparency, human oversight, non-bias and fairness, environmental and societal well-being, accountability, data privacy, and technical robustness and safety. They're seven principles, and they're core concepts of what we should be doing in order to make trustworthy technology. And besides prioritizing it because it's the right thing to do, organizations that adhere to these principles will have a head start on the upcoming legislation that's coming globally in this area. Trustworthy AI is essential for ethics by design. In six years' time, ethics by design, I believe, will be as common a practice as privacy and security by design have become since stronger data protection regulations were enforced globally over the past six years. Okay. Alright. Thanks for that answer. Maybe just changing tack a little bit – it's one of the areas I often scratch my head about. There are a lot of misconceptions about AI. For instance, some people would say high-risk AI only applies to certain businesses, or they might say that trustworthy AI only matters for high-risk systems. Is there truth to that, or how do you see it? I think there is a misconception that high-risk AI only applies in certain sectors or that trustworthy AI only matters for high-risk systems. For those based outside the EU: under the EU AI Act, "high risk" doesn't refer to organizational or security risks. It focuses on risks to safety, health, or fundamental rights. What this means is that a simple machine learning model used for something like credit evaluation is considered high risk because it affects an individual's right to access credit. So trustworthy AI is relevant not just to this, but also to how we design and deploy any AI technology. Just as the concept of privacy by design is essential for any company handling personal data, trustworthy AI and ethics by design are valuable principles for any industry building AI-powered products, which, you know, we can expect to be almost every company. So even for these so-called limited-risk AI systems, like biometric authentication, for example, or financial fraud detection – both of these are explicitly listed within the AI Act as limited risk – if we incorporate the principles into the design of those types of things, there are benefits to doing that, not just for compliance.
So at Daon, for example, those are two products that are used here. And although they are limited risk, embracing trustworthy AI in our technology helps to guide not just compliant but safer, more future-proof systems that are better products. When you adopt this mindset of continuous improvement and oversight for products, it just improves the technology. It would take all day to explain how each principle is embedded into products at Daon, but just to provide some sort of tangible example: AI systems can be designed to continuously retrain themselves, and when we do this, it introduces additional risks into that system, like data poisoning and model drift. Okay. These risks mean that we would need to have much more human oversight and more ongoing monitoring in order to ensure that the product remains safe over time. So understanding this risk and having a trustworthy AI framework forced us to ask the question: do we need, as a company, to be using continuous retraining? And in our case, we decided we don't want our technology to do this. There's little benefit, and it introduces unnecessary risk. The alternative is to train AI models in batches and have controlled versions of models that are released like software updates. They have regression testing and, you know, version control. They have the option to roll back. Mhmm. So to answer your question, I guess, it doesn't matter if our products are deemed limited risk. If we adhere to higher standards and bring these principles into our organizations, there's an opportunity for us to really think smarter about how we're developing technology, and that can have really big positive impacts on the products that we're developing, not just help us meet compliance requirements. So not just approaching it from the perspective of doing the bare minimum? Yeah. Just having an extra perspective and an extra framework and way of thinking about how we're building technology offers this opportunity to innovate just a little bit better. Okay. Really interesting answer. So, let me ask you a short question, which probably has a hugely complicated answer. What should businesses know to ensure they're complying with the expected or existing standards in this area? Well, it's a really big question. Compliance varies by use case. For any type of AI system under the AI Act, what you have to do is most significantly impacted by what your role is in relation to the technology. If you are the AI provider, meaning you placed that technology on the market or, you know, you have built it and placed it on the market, you have a lot of compliance requirements to meet, particularly if it's classed as a high-risk technology, which, again, can simply mean an algorithm that's used for something simple like an insurance quote. Mhmm. If you are the deployer of the AI, you also still have obligations, but they're far lesser, and you can lean on the AI provider to assist you with documentation for the model or the product they sell you. Yeah. As a first step, I would suggest cataloging every piece of AI technology that your organization uses, whether that's something proprietary or from a third party, and establishing what your role is in relation to that technology. And keep in mind that if you buy a third-party AI and you sell it under your own name, you are the provider.
If you buy a third-party AI system and you make a substantial modification that affects its performance, you are the AI provider. And once you establish that, you can go and see what requirements you need to adhere to for that system. Okay. So even changes or tweaks, as you're talking about, are covered in the act, have an impact, have to be considered, and, as you say, can result in you being the provider. Is that correct? Yeah. That's correct. There are a couple of nuances in and around this, but, essentially, the role that you hold in relation to the technology is very important, and there are a couple of nuances in the act around how that role can change. Okay. So it might be the case that you haven't built a piece of technology, but if you have adapted it, that can actually put you on the hook as the AI provider. So it's very, very important to go in and actually establish exactly what your role is for that piece of technology. Okay. Great. Appreciate that little nuance there – an important one. Can I ask you about another aspect of the act? It's a common assumption, and it relates to human oversight. We hear people talking about this all the time. Does this mean humans must make every key decision or be involved in every key decision? How does the act define human oversight? And, you know, is this whole concept of humans being involved in everything related to AI a myth, or is it true? Yeah. There are a lot of misconceptions around that, and the lack of understanding in various sectors around what human oversight in AI looks like has the potential to be very costly for organizations. Human oversight doesn't mean a person must approve every AI-driven decision. Oversight is about ensuring human agency, autonomy, and the monitoring of AI systems. Sufficient oversight will vary by context. So take, for example, something we talked about earlier, a credit evaluation model. Yeah. Some companies might take the approach of deciding to have humans review AI-generated summaries or recommendations for various, you know, credit applications. But if a person sitting in one of these, you know, human-in-the-loop centers that are already being established, mostly in developing countries, is just rubber-stamping a hundred applications per hour, that's not genuine human oversight. It's kind of meaningless, and it might not stand up to auditors. Yes. So oversight involves not just having a human in the loop. It's about actually making sure the AI outputs are accurate, that they're understandable, that they're monitored over time, and having proper checks for bias and fairness and accuracy. Human oversight can't be a box-ticking exercise. It's about carefully deciding for your use case where and how to use AI and ensuring that people remain in control and have good oversight of that technology. For that use case, for example, if you took an ethics by design approach, you would likely redesign the entire system. You might decide to entirely avoid using AI for the initial stage of the credit application process – Mhmm – and just use a rule-based decision. We used to use rule-based decisions all the time, and because we have this new technology, a lot of the time companies are just bringing it in. When you bring in that new technology, you're going to have to bring in the proper oversight for it. So if we revert back to some of the older stuff like rule-based decisions, it doesn't fall under the AI Act.
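To make that design concrete before Louise expands on it, here is a minimal, purely illustrative sketch of the kind of pipeline she describes next: plain business rules screen applications first, a model auto-approves only high-confidence cases, and everything else is queued for a person. The thresholds and field names are hypothetical assumptions, not figures from the act or from any real credit policy.

```python
def route_application(app: dict, model_score: float) -> str:
    """Decide how a credit application is handled.

    Illustrative only: real eligibility rules and thresholds would come
    from the business and from documented model testing.
    """
    # Stage 1: plain rule-based checks, which sit outside the AI Act.
    if app["age"] < 18 or app["income"] <= 0:
        return "rejected_by_rules"

    # Stage 2: the ML model auto-approves only very confident cases,
    # and those decisions are sampled later for human audit.
    if model_score >= 0.90:
        return "auto_approved"

    # Stage 3: everything else goes to a person for review.
    return "human_review"

# Example: a borderline score is never auto-approved.
print(route_application({"age": 34, "income": 42000}, model_score=0.72))
# -> "human_review"
```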
So what you could do is put a rule-based approach in front of applications that deals with, say, thirty percent of them, and that's thirty percent fewer applications that need human oversight. You could have some portion of those applications automatically approved. You could have a machine learning model that automatically approves a certain number of people, and all you would need then is oversight to make sure that that automated approval is working and that you have a process to audit and check it. If somebody requests a review of their application, you have a process for humans to go in and make sure that it's been done fairly. So it really does depend on the sector. Sure. You definitely don't need to have a human reviewing every single thing. It's about being smart and making sure that you're limiting how much AI you bring in, not using it unnecessarily, and that when you bring it in, you do it in a way that comes with the least risk, and you have oversight for wherever those risks are. So it's hugely valuable to put some thought in right up front, as opposed to designing a system where you're plugging in a human all the way along and doing essentially a box-ticking exercise that really just doesn't have the same value. Absolutely. If you had known what was coming with GDPR a couple of years before it came in, if you really knew how it would be, you wouldn't have the situation where people are going through data project after data project after data project. We know what's coming with the AI Act, and we can start to onboard some of those principles, like the ethical principles. And if we do it now, it will save us having to rebuild and reimagine the technology to fit compliance standards. We can design systems now in a way that means, in two or three years' time when these things are all really figured out, we've gotten ahead of it. And I think it's a real opportunity to save money right now by understanding that. Sure. Sure. Saving money, yeah, important as well. And projects, absolutely, important. Okay. That's a really interesting explanation. A very practical, useful answer. Just picking up on one of the topics that you mentioned there, ethics: is it fair to say that if an AI system is compliant with the EU AI Act, that automatically makes it ethical? Well, the act refers to the trustworthy AI principles as non-legally-binding principles. So, in order to be ethical, you would want to be fully adhering to those principles and that ethical framework, which developers can strive to achieve as a high-level goal, but only some parts of those principles are explicitly legally enforced. So I guess the answer is no. You would have to stick to the ethical framework as much as possible, but what is legally enforceable is not the full framework. So can I just, as a follow-on to that, maybe to help explain it a little bit more: what's the difference then between legal compliance and ethical AI? Could you give me some examples, maybe? Sure. So, AI systems that manipulate or target vulnerable groups in a way that's likely to cause them significant harm are explicitly forbidden under article five of the AI Act, and that's been enforced since the second of February 2025. So that's pretty much enforced now. And article five breaches incur the highest level of penalties, up to seven percent of annual revenue.
So that's going to be tightly contested by organizations that are alleged to be in breach of any part of article five. What meets the definition of significant harm for any use case will be established over time by the courts; it's not really provided in the act itself. So if the act bans, you know, significant harm, then it could be the case that moderate harm turns out to be legal. But that doesn't necessarily mean that it's ethical. So I guess how all of this is going to pan out will really be established in the courts, but, certainly, it is the case that the act does not require you to be fully ethical. It just restricts some things and, you know, encourages people to do the things that they should be doing. Okay. Just picking up on a topic that we've touched on a few times there – you mentioned data protection legislation. What do you see as some of the changes for data protection under the act versus what we already have in place? Similar to system security, existing practices for data protection essentially need to be extended to make sure they consider AI and are fit for purpose for it. There are certain things in AI products that are unique versus traditional software. In traditional machine learning, we look at things like data augmentation, feature selection, and weightings. In LLMs, large language models, we have to look closely at things like data leakage, data privacy, and consent. Essentially, the first thing is that AI-specific risks need to be included in existing governance strategies. And the second thing is a slight shift in prioritization. Currently, there's a huge focus by companies who want to do the right thing on privacy by design, the minimization of data collection, never ever sharing data. However, data governance is just one of the seven trustworthy AI principles. And – Okay – even just introducing one extra principle, for example bias and discrimination, introduces an additional task of assuring that AI systems are not discriminating. This means testing them on an ongoing basis for bias. So when an AI provider sells a model to a customer, the original metrics they provide to the customer for, you know, the accuracy and bias of the model may not be valid over time, as the data going into the model will change. Okay. The deployer of that system has to ensure that it's being tested for bias and performance on an ongoing basis, because the inputs will change over time. Now, typically, what companies will do is send back a sample, maybe one percent, of data to the person who built it to ensure this. But we're in a climate where we've really been told: data, data, data, protect data, don't give data, don't share data. Sure. When you then start to ask, oh, well, data needs to come back in for evaluation of bias, for example, well, that's in direct conflict with the one principle that we've really been told to consider. So that's going to be a real challenge – to get people to say, okay, we know about security, we know about privacy, but now we need to consider things like transparency, accountability, and bias. And it's going to be hard to inject those principles in conjunction with the things that we're already doing.
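For readers who want to picture that ongoing testing, here is a minimal sketch of a deployer-side check that could run on a small sample of production decisions: it compares approval rates across groups and flags the model when the gap grows too large. The group labels and the tolerance are hypothetical; neither the AI Act nor this conversation prescribes a specific metric or threshold.

```python
from collections import defaultdict

def approval_rate_gap(decisions: list[dict]) -> float:
    """Largest difference in approval rate between any two groups.

    `decisions` is a sample of production records, e.g.
    {"group": "A", "approved": True}. Illustrative only.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += d["approved"]
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def needs_review(decisions: list[dict], max_gap: float = 0.10) -> bool:
    # Flag the deployed model for investigation if the gap exceeds a
    # hypothetical tolerance agreed with the provider.
    return approval_rate_gap(decisions) > max_gap

sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]
print(needs_review(sample))  # True: 100% vs 50% approval in this toy sample
```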
And not just for companies like Daon who are developing technology, but for customers as well, there's going to have to be, I suppose, a shift in terms of how they look at data, because, as you say, it was don't share it, don't use it, don't do this, that, and the other. And now, it's one of these seven principles, but data itself is going to be needed to help. You're going to have to put it to use in a way under the new act. Is that fair? Yeah. That's fair to say. It's that we now have to consider, like, the bias of the data. This data essentially powers the models, so if the data is biased, the model is biased, and we now need to be checking for things like that. It's not just about securing it. It's not just about having consent. It's now about ensuring that we're using it in a way that's fair. Okay. Very interesting. Just moving on: some would argue, Louise, that ethical governance in AI, or indeed in technology development itself, stifles innovation. What's your perspective on this? I think that, you know, innovation that's not rooted in ethics is ultimately not sustainable or good. I think there's a huge opportunity presented here, and a map to develop technology. In Europe, you know, we've read the competitiveness reports around AI innovation in Europe. However, I don't think they take into account the opportunity. We've seen in the last ten years that when people build products that are customer-orientated, that are human-centric, that technology has done really, really well. So this is a road map to develop technology with humans at the core. It's a road map to develop technology that has the potential to be adopted and accepted and integrated. And, you know, if people want to see that as a blocker, they can see it as a blocker. But I think, ultimately, it's a road map to build really good human-centric technology that has the power to be leading – you know, for building technology in Europe that has humans at the center of it. Mhmm. We're building competitive products that can be adopted globally. I think it's a huge opportunity, and it's a huge road map for how to do this. And if people can have a mindset shift and get on board with that, I think the people who do that will be the people who are going to do well out of this. Why do you think, then, that the misconception persists? Because it does, doesn't it? I think this misconception is like anything: mindset and perspective are things people hold. You know, in business, there are some people who immediately think win-win. They think, let's do deals that benefit everybody, and those people do quite well in business. And then there are organizations, or people, that believe that in order for them to do really well, somebody else has to lose. People think in different ways about things. Sure. You know, some people will just see this as a barrier and go for the minimum compliance requirements, and then I think those people who see this as an opportunity will do quite well out of it. Yeah. I think it's just different types of mindset, and different people will do differently based on which mindset they take when they engage with this. Sure. Sure. Okay. Appreciate your insight there. To wrap up, can trustworthy AI eliminate all risks, do you think?
Or are there always limitations or advice that we should keep in mind? Yeah. Unfortunately, we can't really eliminate all risks, I guess, through any framework. The limitations to keep in mind will vary by use case or technology. Generally, anything that's not reproducible or can't be properly explained is going to bring challenges, you know, and risk for AI governance. I would suggest not over-engineering solutions and using risk evaluation as a key factor in preparing business cases for any projects involving AI. So for any technology that you're thinking about building, look at the risks that that kind of approach brings. I would also say to include stakeholder input: ask internal stakeholders, and also the end user of whatever it is that you're planning to deploy, what they think of your system before you build it. And always test not just for performance but also for bias throughout the AI life cycle, in both the data and the models. Louise, thank you so much for your time in answering these questions, none of which I would have thought was particularly straightforward, and you did a fantastic job. So we really appreciate that. And we thank the listener for paying attention through this and hopefully enjoying it and finding something useful in it. And if you have any questions, please do not hesitate to come back to us at Daon. Thank you very much.




