Aug. 1, 2025

From Workflows to Autonomy: Agentic AI and the Future of Human-Machine Collaboration


In this episode of How I Met Your Data, hosts Anjali and Junaid sit down with guest Jay Krish to dive into the rapidly evolving world of Agentic AI, a paradigm shift from rule-following automation to systems capable of autonomous decision-making. Jay, a seasoned financial services leader and AI thinker, breaks down what Agentic AI really is: a network of large language models working together to reason, adapt, and act independently toward a goal.

Together, they explore:

  • 🧠 How Agentic AI differs from traditional automation and ML

  • 🚸 Why AI autonomy should be earned like parental trust—and the stages of building that maturity

  • ⚖️ The escalating risk matrix: Who owns the risk when AI goes rogue?

  • 🛡 Why human-in-the-loop design must persist, even in autonomous environments

  • 🌎 The unspoken costs: environmental impact and the power-hungry infrastructure behind AI innovation

  • 🔮 What the rise of agent-based systems means for the future workforce—and how to prepare for what's next

Jay offers practical advice on getting started, reframing fear into forward motion, and bringing ethical, human-centered thinking into the AI build process.

If you're curious about AI's next frontier, the risks and rewards of autonomous systems, and how to stay resilient in the face of transformation—this one's for you.

Anjali
[00.00.02]
Welcome back to How I Met Your Data, the show where we stop pretending data is simple and start getting real about what it takes to work with it. We've been in the data trenches, and now we're here as your companions in this ever-evolving world, where the community is as diverse as the stories we share, from hands-on practitioners to the rapidly shifting tech. We're bringing you the insights, strategies, and sometimes chaos that shape the way data really works. Whether you're here for fresh ideas, a spark of inspiration, or just some good old-fashioned data banter, you're in the right place. Views expressed by me and our guests are our own and do not reflect those of our employers. So grab a coffee, get comfy, and let's dive in. Welcome, everybody, to the latest episode of How I Met Your Data. Today I'm really excited to welcome Jay Krish to talk to us about agentic AI and everything we ever wanted to know about this ever-evolving topic. Welcome, Jay. Would you mind introducing yourself to our listeners?
Jay
[00.01.05]
Yeah, glad to be here, guys. Hi, Anjali, Junaid. For your audience, this is Jay Krish. I have been in the financial services industry over the last 20 years, and ten of those, I would say, focused on either data or machine learning or AI in that field. I work for one of the largest, if not the largest, custodians in the world, State Street Bank, which also happens to be the oldest bank in the US. With that, I would say anything I express in this podcast is my own opinion and does not tie back to my employer; these are not the positions of the company, they are my personal opinions. Having said that, I really appreciate being here with you guys, who I consider experts in this field as well, to share opinions, to understand, and to really have a conversation.
Junaid
[00.02.22]
Couldn't be happier to have you here. Jay, I've always told you this, and I'll flatter you again: I love your presentations, and I love your thoughts and thinking on these topics. Couldn't be happier to have you here. So maybe let's just jump into this: agentic AI and AI agents. Can you give us an overview, so we can ground some folks on the topic as we get into some of the deeper concepts?
Jay
[00.02.49]
The simplest way to understand agentic AI is augmented large language models that are chained together to accomplish a particular goal. That terminology is being used to describe an architecture in which large language models can be strung together with the latest orchestration techniques such that we can reason, we can decide, we can allow systems to act on their own to accomplish a particular goal. That's the change. These are not just systems that follow a particular flow prescribed by humans. These are, quote unquote, living systems that decide what the goal is, understand what variables they're dealing with, devise a plan of action that they can follow, continue to check whether they are accomplishing that goal, revise if they need to, and then do it. So we are looking at a complete paradigm shift, going from simple AI systems to these highly sophisticated yet simple and effective organisms that can accomplish fascinating things.
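For readers who want to see the shape of that loop in code, here is a minimal sketch of the reason, plan, act, check, revise cycle Jay describes. The `llm` and `run_tool` functions are hypothetical stand-ins for a real model API and tool layer, not any specific product:

```python
# Minimal agent loop sketch: the model plans, acts, observes, checks, and revises.
# `llm` and `run_tool` are hypothetical stand-ins, not a specific vendor API.

def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Placeholder for executing an action (search, API call, code, etc.)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> str:
    history: list[tuple[str, str]] = []
    plan = llm(f"Goal: {goal}\nDevise a step-by-step plan.")
    for _ in range(max_steps):
        # Decide the next action from the goal, the plan, and what happened so far.
        action = llm(f"Goal: {goal}\nPlan: {plan}\nHistory: {history}\nNext action?")
        observation = run_tool(action)
        history.append((action, observation))
        # Check progress; revise the plan if the current one is not working.
        status = llm(f"Goal: {goal}\nHistory: {history}\nReply: done, revise, or continue.")
        if status.startswith("done"):
            break
        if status.startswith("revise"):
            plan = llm(f"Goal: {goal}\nHistory: {history}\nRevise the plan.")
    return llm(f"Goal: {goal}\nHistory: {history}\nSummarize the outcome.")
```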
Junaid
[00.04.11]
So it's beyond just automation, where you're repeating a single task, right? You do one task and you do it repeatedly, without any change in the way it's executed. It sounds like, the way you're describing it, there are always variables in life, and those variables are always changing, so your automation needs to change its execution based on any variable changes. How would you get started in making a shift from AI, which is largely machine learning and natural language processing based, to agentic AI?
Jay
[00.04.50]
Yeah, that's a really good question. I actually touch upon this topic in my latest article on my Substack channel as well: what are the top five things you can do to get started? How we go about doing it is really important, and there's a lot of hype around this, right? We look at agentic AI, and at its core it's a rudimentary algorithm. An example that I touch upon in my article is the slime mold. The slime mold is an organism that does not have a brain; it has only a string of cells, what we could call a network of cells. Yet the slime mold has shown remarkable intelligence: it can find food, it can solve puzzles to do that, it can get through obstacles, it reasons about what is good and what is bad, and it adapts to the environment as things change. Agentic AI, or the architecture that connects agentic AI with the rest of the ecosystem, can accomplish the same thing. So when you think about bringing that into a firm or a new opportunity, you really have to ask: do you understand these individual components, and do you know how to accomplish simple things with them? That would be the first step. The second step is to ask what you are going to automate, with the goal in mind. Rather than taking a solution and asking what problem it could fix, ask: what is the problem I'm trying to fix, where do I go to address it, and what are the components I have that I can use to solve it? And then have a framework to accomplish it. I again touch upon that in terms of maturity of autonomy, how we can hand autonomy over to agentic AI slowly. So the top three would be: one, understanding the components of this architecture; two, knowing what problem we are trying to solve and taking simple use cases; and three, having a graduated autonomy framework.
Anjali
[00.07.02]
There is this inherent expectation of autonomy of decision-making by your agents. So how do you build safeguards to ensure that agents are behaving appropriately and deserve the autonomy that they've been given?
Jay
[00.07.21]
Anjali, this is a great question. As I'm looking at the opportunity, I'm also concerned about the risks on the other side. I would juxtapose this question with how we as humans see autonomy with our children. Human children do not automatically get to decide to do whatever they want. Seeing kids grow from a toddler to a middle schooler to a high schooler, they have to earn their autonomy. If they do something good, they get more freedom to do more things. If they don't, their freedom is taken away a little, so they correct their behavior. Earning autonomy comes with maturity, maturity in decision-making, but also transparency. If children are more transparent about their struggles and communicate back to their parents about where they need help and where they don't, where they're confident in pursuing activities and where they're not, that allows the parents to really have a meaningful conversation. My sister was telling me about my nephew yesterday; he's turning 12, and he wants to go on a tour across Toronto on his own, because apparently he knows everything. There was a conversation about what he could and could not do, and he's like, I'm a big boy, I can do anything. So I see that conversation playing out in real time, and that's an example of how we have to approach agentic AI. The framework that I advocate is one where we start with simple agentic workflows. The simplest agentic workflow would be chaining together prompts that can make simple decisions based on what the humans ask, or have already prescribed for it to ask. This doesn't involve any major decision-making; it only requires certain changes to the decisions as things progress, but most of the decision-making is still retained by the humans. Whether it is memory management, whether it is augmentation, whether it is course correction, all of that still stays with humans, and only the simple incremental steps are done by agents. That would be the first step, an agentic workflow. I do not call it agentic AI; it's still a workflow. The second step would be where the humans still perform the complex tasks while simple decisions are made by AI agents. You have a human in the loop still making complex decisions. An example would be a human saying: go find me all the articles, all the negative news, about a particular firm. That would involve agents going and looking at news articles, which could relate not only to the firm but also to the owners of the firm and anyone tangentially related to it. Another agent could be reviewing what the summary looks like. Another agent could be verifying the sources it is retrieving from, making sure that the sources are actually attributed. Another agent could be looking at the tone of the summary. So you have different agents performing different tasks and coming together to provide that summary while still accomplishing the task. At the end of the day, the humans still decide if the summary is in a good position to be published. That would be the second step. The third would be the handover. Now, this is the really important transition point.
And a lot of things have to come together to make that third transition, which is: you hand over the complicated tasks to the AI and you stay with the simple ones. Now, this transition, I'm thinking, happens in the next 12 to 18 months; I could be wrong. It is happening in smaller, less mission-critical applications today, but it is going to continue to happen, because people are looking for complicated decision-making to be codified. I'll take an example. Let's say you wanted to rebalance a portfolio, and that rebalancing has to happen at a frequency that you can't afford with your current workforce. You could decide, in the next 12 to 18 months, to automate certain parts of that rebalancing, including the connection to algorithmic trading, risk management, the risk appetite of your client, and all the analyst reports out there about that particular industry or sector. People make these decisions today using a lot of AI and a lot of machine learning, but humans are still making the decisions, deciding whether to come out of one position and go into another. Those decisions could be codified and given to AI to continue to rebalance, as long as it stays within a particular boundary. Now, this is a very important transition. I keep saying why: because any mistake here scales at an exponential rate. It's not linear anymore. You make a mistake in one summary, in the previous example I talked about: the AI makes an error in identifying the ownership structure for a news article and produces it, the human does not look at it, and it's published. But it is still targeted, it's small, one news report about one firm that you can always retract. When you go to the next stage, where you are allowing AI to make complicated, complex decisions, the failure scales exponentially and the impact is equally damaging. So if a rebalanced portfolio takes a hit, who's to blame? The model developer, the decision maker, the model owner, the risk manager, the portfolio manager, or, in the worst case, you start blaming the vendor, although you are the one who used them. It could be anyone, right? So the ownership of the risk of these AI failures is going to get complicated, because you will not know where the problem is. How did it occur? How do you come back from it? Should it have stopped? And what are the guardrails you have if it eventually crosses the boundary and makes decisions that are uncontrollable? Now, hopefully I've scared you enough. But once you cross that boundary, the next step, of no human involvement, I have yet to visualize in my own head for a critical application. I have not seen what a fully autonomous, no-human agentic AI architecture would look like, at least in the next 2 or 3 years.
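A rough sketch of the bounded autonomy Jay describes for the rebalancing example: the agent acts on its own only inside a pre-agreed boundary, and anything outside it escalates to the human in the loop. The threshold and the `propose_rebalance` function are illustrative assumptions, not a real trading interface:

```python
# Hypothetical sketch: graduated autonomy with a hard boundary and human escalation.
from dataclasses import dataclass

@dataclass
class Proposal:
    asset: str
    weight_change: float  # fraction of the portfolio the agent wants to shift

# Illustrative policy: the agent may move at most 2% of the portfolio on its own.
MAX_AUTONOMOUS_SHIFT = 0.02

def propose_rebalance(signals: dict) -> Proposal:
    """Placeholder for the agent's logic (LLM + risk models + analyst feeds)."""
    raise NotImplementedError

def execute(proposal: Proposal) -> None:
    """Placeholder for order placement."""

def escalate_to_human(proposal: Proposal, reason: str) -> None:
    """Placeholder for routing the decision to a portfolio manager."""

def rebalance_step(signals: dict) -> None:
    proposal = propose_rebalance(signals)
    if abs(proposal.weight_change) <= MAX_AUTONOMOUS_SHIFT:
        execute(proposal)  # inside the boundary: the agent acts autonomously
    else:
        # Outside the boundary: the complex decision stays with the human.
        escalate_to_human(
            proposal, f"shift of {proposal.weight_change:.1%} exceeds autonomy limit"
        )
```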
Junaid
[00.14.35]
Lots to unpack there. I like your summary of the three phases of implementing agentic AI. You start with what you call workflows, very straightforward workflows, stringing together prompts into a chain of tasks for it to do. In the second phase you'd still have human involvement for the complex decisions. And in the third phase it would be almost completely, well, essentially autonomous. But you kicked over an interesting rock, which is: who owns the risk? There's the model design, right? There's the data for AI, the data that you use for the model. What were some of the other risks, as a recap, that you highlighted? I think it was model design, data, and potentially vendor risk. How do you really create the matrix, identify the ownership in that matrix, and mitigate risk?
Jay
[00.15.49]
Yeah, that's a really good question. Some of the ideas I'm pondering for, not the next article, but the article after that, relate to this. There is no straight answer today, but here is something we can leverage from how the auto industry is looking at it. When you think about fully autonomous, self-driving automobiles, right now there is a debate in the courts about who owns the risk when the car runs someone over, and I hope I'm not making light of anything that really happened. You can see how these arguments could be construed. Today the conversation is: is it the driver? Is it the car manufacturer? Is it the person who installed the software? Is it the person who makes the software? Is it the town that has not put the right guardrails in place for these things to operate, yet allowed the cars to function? You could just keep going with this, and the decisions are case by case: in this case the driver dozed off, in this case the software had a glitch, and so on and so forth. That framework, or the lack thereof, lets us see what the impact is going to be when it comes to non-mission-critical applications, or anything related to the ideas we talked about. Let's just postulate the different personas involved with these implementations. You have the models that are being built on a regular basis by major firms and widely leveraged within the enterprise. Those firms will still own some part of the risk, although it'll be hard to argue that they take on the liability. Then let's come into the enterprise itself and ask: does the design of the architecture share the blame? I would say yes, there is some responsibility in terms of how it was designed: whether failsafe mechanisms were built in, whether there was a way for a human to get involved if it crossed outside the boundaries or needed a course correction that was never thought out when the design happened. Questions like this evolve, and the folks who built that architecture without thinking about these guardrails will have to share some responsibility. Now let's go to the next level: whoever is the model sponsor, or the model owner as we call them. Do they have responsibility? Yes, they will have to take responsibility, and most of the time it will come with liability constraints. Now let's play that out. If we put all the responsibility for the risk on the model sponsor or the model owner, that will stifle any progress towards adopting these models, because no one person can be responsible for the implementation of this entire architecture, and so they will be much more reserved about providing that level of sponsorship and ownership, because all the risk falls on them. Having that concentration of risk on the model owner or the model sponsor is also not a good idea. Then let's go to the ultimate beneficiary of the model. In the previous case it could be a portfolio manager. Their clients are the ones who actually got affected, and it's their reputation that is damaged. To me, it boils down to who is finally getting affected: the clients are affected, and the person within the enterprise who is responsible for servicing those clients, or their team, is affected.
Where I'm leading this conversation is: when we implement this AI, it's not a replacement of the people, and we shouldn't approach it that way. We should ask how the team that supports these elements of agentic AI today can work within that framework, where they can build the guardrails around it, where they can continue to evolve it, where they can fine-tune these models, where they can watch for change and adapt. So it's not that these agents are taking the place of human beings; they're actually complementing people's work by making them more efficient and more effective, and being part of the ecosystem. So keep them in the loop, even when there's a fully autonomous agentic AI in the workforce and it is functioning. I'm not seeing that in the next 3 to 5 years, but let's just assume we are there. Even then, you have to have some level of human interaction at a regular pace to keep up with the change. The more fine-tuned a system is, the greater the chances of its component parts breaking down, and if you don't have an organization that can support that fine-tuned machine, everything takes longer to fix, and every other service that you offer will be in jeopardy. I think having the human team involved, even after the AI becomes fully autonomous, is critical for enterprises.
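One hedged way to picture the "shared, not concentrated" ownership Jay argues for is a simple matrix mapping failure modes to the personas who share them. The failure modes, personas, and shares below are assumptions for discussion, not an industry standard:

```python
# Illustrative risk-ownership matrix: accountability shared across personas,
# not concentrated on the model owner. Personas and shares are assumptions.
RISK_MATRIX = {
    "foundation model defect": {
        "model vendor": 0.4, "architecture team": 0.3, "model owner": 0.3,
    },
    "missing failsafe or guardrail": {
        "architecture team": 0.5, "model owner": 0.3, "risk manager": 0.2,
    },
    "decision outside risk appetite": {
        "model owner": 0.4, "risk manager": 0.4, "business sponsor": 0.2,
    },
}

def owners(failure_mode: str) -> dict[str, float]:
    """Look up who shares the risk for a given failure mode."""
    return RISK_MATRIX.get(failure_mode, {"unassigned": 1.0})

print(owners("missing failsafe or guardrail"))
```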
Junaid
[00.21.00]
It's interesting when you have the whole risk conversation. I think it will indeed be tricky, and it'll be interesting to see how the industry plays out: how do you identify all the risks, how do you quantify or measure the risk, because there's always going to be some level of risk. It's not going to be a risk-free exercise. And then articulating a risk appetite and saying, I can live with X amount of risk. And then being really clear-eyed about how you provide transparency on all of this: you have a matrix of risk ownership, you have a way to measure or quantify that risk, and you ensure that the way you measure that risk aligns to your risk appetite. I think it's going to be interesting to see the frameworks and what the industry standards might be. You identified this as a paradigm shift; you think it's going to fundamentally change the way that we operate. As agentic AI becomes more prevalent and its implementations get more complex, it'll require less human in the loop. So what do you think will happen to the people who are doing the jobs that agentic AI will potentially replace? What is the outlook for the future workforce?
Jay
[00.22.19]
Yeah, that's a loaded question. Do I believe all human jobs are safe in the next 3 to 5 years? No. Do I think that everyone is going to lose their job because of agentic AI? No. The answer is somewhere in the middle. The question that I ask myself, and talk to my mentees and my mentors about, is how we prepare ourselves for the next three to 5 to 10 years, 20 years down the line. And the way we should approach it is the way we approached any other transformation that has happened in the past: the industrial revolution, the information revolution, the internet revolution, the social network revolution, the chip revolution. Now we are in the AI revolution, right? All of these revolutions have happened, and each has fundamentally shifted how we look at things, not just how we operate but how we perceive the world. I read this book a long time ago, Who Moved My Cheese? There's this parable about what the mice do when the cheese moves, and the parable leads to this: instead of figuring out who moved my cheese, look towards what is likely going to happen, and ask how you are going to position yourself to be there when it happens. That, to me, is where we need to lead our workforces, rather than taking a fear-based approach to this.
Anjali
[00.23.50]
So, Jay, what advice would you give to somebody, say, entering the workforce or looking to stay resilient in the face of changes that are quickly unfolding?
Jay
[00.24.04]
Well, let's think about all the robotic automation that has taken over, taking away jobs from workers with a high school education. How did they adapt? They learned. They retooled themselves. They trained. They went into industries that are complements of the ones they were in: people in the natural gas and fossil fuel industries are moving into the solar industry, into windmills. These are real shifts that are happening. And there are people who have not adapted, and they're kind of stuck. So take that as an example. Anyone sitting today wondering what to do now, given AI is coming, should start by knowing what they are really, really good at. If you don't know what you're really good at, no one else is going to know it: what drives you, what you're passionate about, what you're so good at that society will find you, pay you, and employ you to do it. That is the number one thing. If you don't know what that is, somebody else is going to tell you what it is. Know what you are really good at. That's the first thing. The second thing is to ask where all this is leading. Well, it is leading towards a lot of math, towards probabilistic scenarios, towards highly efficient decision-making, towards automation. We need to be thoughtful, as career professionals, to look at these trends and ask: how do I retool myself, how do I upskill myself in these areas? And the third, most important thing is being absolutely committed to connecting with other professionals in the industry. We always treat that as a side thing: all right, yeah, I have to reach out to that person, sure, I'll send that email later; yeah, I have to call this person. Humans have to band together. It's not human versus human anymore; it's human versus AI, in the most simplistic terms. And what is the one thing we've got that AI doesn't have? The network. It's knowing each other, talking to people, having that connection. AI can never beat that. Two people in a room can solve more problems than ten agentic workflows and orchestration techniques. That's the truth. So if we take as a central premise that we are going to get to know one person a day, no matter what, whether at the gym, out walking, or having a coffee, and we put that front and center, we will develop as an organism better than we have been, be more empathetic to the rest of humanity, and build the competency necessary to weather the storm.
Anjali
[00.26.52]
Is there anything about AI that scares you or gives you pause?
Jay
[00.26.57]
I'm not a doomsayer, although the one place where I see a concern is warfare. There's a new autonomous fighter jet that was introduced on 60 Minutes. That, to me, is concerning. And again, the pilots, the policymakers, the government, I know they will do the right thing and not let robots make decisions about human life.
Junaid
[00.27.22]
Interesting. One thing you said at the beginning that I thought was really interesting is this: we're in the midst of a revolution, right? The AI, gen AI, agentic AI revolution. But this revolution required several other revolutions. Maybe I'd say it in a slightly different way. Processing power was limited when AI first showed up, in the 50s and 60s, if I'm correct. Since then, if you think about the revolution that took place, it required the ability to have almost unlimited processing power. You need almost unlimited capacity, the ability to store and look at a tremendous amount of data. We no longer have a capacity problem; we don't have a processing problem; and because those are not problems anymore, there isn't a cost problem as such, at least for those components. So you have all of these revolutions of sorts.
Jay
[00.28.32]
Well, let me round out that story, because there's an interesting story behind it, and then I'll go to the other ones. Around 1969 or 1970, the first paper came along about how to store information in something other than a flat file, because back in the day it was only flat files. If you have two flat files, how do you make relationships? Well, they imagined: okay, if I have a primary key and a foreign key, I may be able to say, here are the fact tables, here are the relationship tables, and here's how I would make that connection. So they started building systems that could do that. As things evolved, memory was expensive, expensive to the point that a gigabyte of hard drive would be, in today's dollars, almost $100 million worth of memory, and for a gigabyte of RAM, double that. So storage was really, really expensive; compute was not, because we were still dealing with ones and zeros. So what they decided in the 80s was: let's not store the data redundantly, let's store the structure of the data, and we will deal with the processing later. In other words, they started storing the columns and the relationships between the columns, and you write a SQL query to pull all these things together. Storage was no longer expensive, because you're only storing bits of information, and as you run the query you're synthesizing all those relationships and producing the result, which can then be used. So the cost shifted from storage cost to processing cost. Oracle, DB2, and all these database companies thrived for so long because storage was really, really expensive. In the 90s, when the early versions of the cloud came about, storage started becoming cheaper, so storage was no longer a problem. Then compute became the question, and Intel, AMD, and all these firms were trying to pack as much onto silicon as possible to get that compute. And then came Nvidia and the GPU that transformed that.
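A minimal illustration of the primary-key and foreign-key mechanics Jay walks through, using Python's built-in sqlite3 module. The table names and data are hypothetical; the point is that the data is stored once and the join reconstructs the relationship at query time:

```python
# Minimal sketch of the primary-key/foreign-key idea using Python's sqlite3.
# Table names and rows are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE firm (
        firm_id INTEGER PRIMARY KEY,
        name    TEXT
    );
    CREATE TABLE trade (                               -- fact table
        trade_id INTEGER PRIMARY KEY,
        firm_id  INTEGER REFERENCES firm(firm_id),     -- foreign key
        amount   REAL
    );
    INSERT INTO firm VALUES (1, 'Acme Asset Mgmt');
    INSERT INTO trade VALUES (100, 1, 2500.0);
""")

# The query synthesizes the relationship at read time: data is stored once,
# and the join reconstructs the connection between the two tables.
for row in conn.execute("""
    SELECT f.name, t.amount
    FROM trade t JOIN firm f ON f.firm_id = t.firm_id
"""):
    print(row)  # ('Acme Asset Mgmt', 2500.0)
```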
Junaid
[00.31.56]
Yeah, and that's another good component: the hardware revolution, the underlying hardware revolution that's required to support all of this. You also pulled on another interesting thread that got me thinking, which is how AI could potentially be used in warfare. To your point, it's inevitable. When you think about the history of warfare, way back when it was the size of your army: you had infantry, people on foot, because that's just what you had. Then it shifted to chariots, and whoever had the chariot technology prevailed; then it shifted to ships, and whoever had the latest technology became the new world power. You had the British Navy, the French Navy. It went from foot soldiers to chariots to ships, each an evolution in technology. And you can keep playing it forward: machinery, airplanes, and so on. I hate that we're talking about weaponizing AI, but it's an inevitable conversation: the next determinant of a new world order will be who has the ability to have agentic AI process a number of things, control airplanes, things like that. Let's hope there are also AI use cases where somebody is implementing AI to do surgeries or something in underprivileged areas.
Jay
[00.32.40]
Oh, it's already happening. Well, not autonomously, but robots are controlled over the internet by doctors to operate on specific tumors. And if you think about it, the challenge of discriminating between the brain and the tumor inside the brain: they look nearly identical, and it is incredibly difficult to know where that line ends. That research is already happening, and it's incredible how the medical sciences are using this for good. For once, let's use it for good. But let's go back to that analogy you talked about. There's this book, Sapiens, by Yuval Noah Harari; I don't know if you've read it. It's one of the most fascinating books about human evolution, and to your point, the civilization with the greatest technology always prevails. It's a true statement, and I think history has proven time and again that we've been fighting since we had clubs and sticks, and we keep fighting. Somebody quipped that the Third World War will be fought with nuclear weapons, and the Fourth World War will be fought with sticks and stones.
Anjali
[00.34.07]
So, funny story, side note, completely irrelevant. When my sister and I were younger and my mom was at work, we were arguing with each other and kept calling my mom at work, right, to tell on each other. And my mom finally got so frustrated with us, she goes: why don't you two just go outside, get some sticks, and beat each other? Leave me alone.
Jay
[00.34.32]
So maybe she was on to something that works. That's the go-to, Anjali.
Junaid
[00.34.38]
As a parent now, I can empathize, you know what I mean? With your mom.
Jay
[00.34.49]
That is awesome.
Anjali
[00.34.51]
Violence aside, Jay, I had a question for you. I wanted to go back to something you were talking about earlier. I often say my little Kindle reader has more memory and more processing power than my first computer, a Commodore 64, in the early 80s. Just in my hands, I have much more memory and computing power than that computer I loved and played games on for years. But one of the things I think about is that as we've evolved and advanced our capabilities, we've created unintended consequences as well. One of the things I keep reading about is the unintended environmental impact of AI: given the sheer volume of data and processing power required to fuel our AI ambitions, there's this uptick in water consumption and electricity needs and things like that. So I was just curious, have you seen any attention applied to the environmental impact of AI as well?
Jay
[00.35.58]
Sure. I think that's a really good way to look at it: what's the cost, and who is paying it? The benefits that we all want and share are asymmetric at this point in time, obviously skewed towards wealthier nations. And, I do not know that this is correct, but apparently there is a coal-based power plant going up in China every sixth day. Every sixth day. We talk about how China has come up with solar and electrical and nuclear and all these different types of power, but to know, and maybe it's an anecdote, I have to verify the data, that every sixth day a coal power plant is going up in China is not a good stat. And they are one of, if not the, leading contenders against the US in terms of AI. We can see how this could destabilize the ecosystem if we continue on this path, and the damage to the ecosystem is exponential, not linear. So care needs to be taken, and I don't see that being a conversation in the industry at this point in time. There has been a return of greed in the last 12 months or so: how do we make this new technology into a moneymaking machine? And there's no guardrail for human greed, I'll tell you that; we know that, right? So that's where it's important for people with your level of thought leadership to bring up this topic and have a framework for equity for the climate and the environment, just as people have equity in this. Junaid's question was how you are going to stabilize the impact to civilization and the workforce, and how you are going to mitigate while you're embracing this revolution; equally important is the climate, the world we live in, and what we are going to leave to the children of the future. I think that's an important conversation, Anjali, and frankly, in all my conferences and conversations it comes up only in passing. It is not brought up as often as it should be. But let me try to pivot this to a more positive spin. If I think about how we build this equity into engineering workflows and agentic AI, the first thing we really need to do is have a conversation about benefit versus cost, risk versus reward, supply versus demand. The eight lenses, as I call them: fear and greed, risk and reward, supply and demand, pros and cons. We put on these eight lenses and really look at the problem set: where do I give, and what do I get? Once you start making decisions through those eight colored lenses, at least you've put some thought into it: yes, I am going to give up some performance so that I can bring in transparency; I can give up a little speed to address a privacy concern. You have to start really making those trade-offs, because you are starting with those different lenses on how you solve a business case. And if we all start taking that methodical approach, asking how we really provide agency to these unspoken parts of the equation, we can then start having a mature conversation towards implementing better solutions: having a holistic view of the problem, having resilient systems that do not fail to simple hacking attempts, having a human-centric approach that brings humans along as things progress, and having a positive impact on the underprivileged as a benefit while building these kinds of things. I think having a broader perspective will help us get there.
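A back-of-the-envelope sketch of how the eight lenses might work as a scorecard. The four opposing pairs come from the conversation; the 0-to-10 scoring scheme, the example numbers, and the verdict rule are hypothetical:

```python
# Hypothetical scorecard for the "eight lenses": four opposing pairs,
# each lens rated 0-10 for a proposed use case. Scores are illustrative.
LENS_PAIRS = [
    ("fear", "greed"),
    ("risk", "reward"),
    ("supply", "demand"),
    ("cons", "pros"),
]

def trade_off_report(scores: dict) -> None:
    """Print the give-versus-get balance for each opposing pair of lenses."""
    for give, get in LENS_PAIRS:
        balance = scores[get] - scores[give]
        verdict = "proceed" if balance > 0 else "rethink"
        print(f"{give} {scores[give]} vs {get} {scores[get]}: {verdict}")

# Example: evaluating a proposed agentic workflow for client reporting
trade_off_report({"fear": 4, "greed": 6, "risk": 7, "reward": 5,
                  "supply": 6, "demand": 8, "cons": 5, "pros": 7})
```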
Anjali
[00.40.35]
Jay, thank you so much. This was a fascinating conversation. I can't believe how much ground we covered, and I'm so excited to see where this topic goes next. So thank you again, and we'll talk again soon.
Jay
[00.40.47]
Thank you, Anjali. Thank you, Junaid. Bye-bye.