On this episode of Policy Outsider, guests Mila Gascó-Hernandez and Kevin De Liban explore the existing and future roles of AI in government systems and the risks and opportunities associated with increased integration of these new tools.
Transcript was generated using AI software and may contain errors.
Joel Tirado 00:04
Welcome to Policy Outsider presented by the Rockefeller Institute of Government. I’m Joel Tirado. On today’s show we are going to dip our toes into the artificial intelligence conversation. Our guests are Mila Gascó-Hernandez, research director at the Center for Technology in Government at the University at Albany where she is also an associate professor, and Kevin De Liban, the founder of Techtonic Justice, a community-based effort to strengthen local justice movements against the harms of AI. The discussion explores the existing and future roles of AI in government systems and the risks and opportunities associated with increased integration of these new tools. That conversation is up next.
Joel Tirado 01:01
On today’s show, we’re going to talk about AI and government. I don’t like to talk about AI because all I do is talk about AI in every private and public space that I’m in, but we’re going to talk about AI, and hopefully we can move beyond some of the more, I don’t know, reactionary or typical conversations that I tend to hear across various spaces, which is either AI doom and gloom or AI optimism to the point of absurdity. So this morning (we’re recording on Wednesday, April 22), you were both at a round table. What was the purpose of this round table? Because I think it tees up what we’re going to talk about today. And I don’t know which one of you wants to start.
Mila Gascó-Hernandez 01:56
So we had this round table co-sponsored by the Center for Technology in Government, which is a university-wide research center that works on information and technology in government. And of course, in the last few years, most of our work has been on AI. So having Kevin as our guest for the day, we thought it would be interesting to have a round table about AI, public values, and democracy, generally speaking, given that AI, as we say and as we study, is making a huge impact on public values, and mainly on guaranteeing public values. Kevin has worked on these topics from a more practical point of view, so we thought it would be interesting to have several people discuss them.
Joel Tirado 02:46
Great. And so, yeah, Kevin, as Mila just mentioned, you’ve worked on this. So what qualifies you to talk about public values?
Kevin De Liban 02:54
Sure. So I was a legal aid attorney. For those of you who don’t know, legal aid attorneys represent low-income folks in all kinds of civil legal matters; that’s nothing criminal and nothing like personal injury related. And while I was a legal aid attorney, I was representing several clients who had disabilities, like cerebral palsy, quadriplegia, multiple sclerosis, who were on a Medicaid program that provided home-based care so that people could stay in their homes instead of in institutions. That’s better for folks’ independence and well-being overall, and it’s also much cheaper for the state. However, without any prior warning, without any notice, they had their benefits cut drastically, to the point where folks were lying in their own waste, were getting bedsores from not being turned, and were just suffering immense horrors. And when we investigated, we learned that the state had implemented an algorithmic decision-making system, something we would now definitely call AI, to make those decisions about care. We fought it successfully. We won. We got rid of that old system. And that’s one of the relatively few examples of successful advocacy against AI-based decision making in the country. So I’ve done it, and I’ve done it in a lot of other contexts, recognizing the harms and dangers of AI for low-income folks.
Joel Tirado 04:09
And you may have mentioned this at the start; the state in this case was Arkansas.
Kevin De Liban 04:14
It was, yes.
Joel Tirado 04:15
Okay. We are working out of New York State, so many of our listeners are New York State based. So I just want to clarify for folks, this was in Arkansas, although the same concerns, more broadly, I think, apply across all of these systems.
Kevin De Liban 04:30
Yeah, and in fact, there have been some movements in the state of New York to adopt assessment systems, and even algorithms attached to those assessment systems, that share some of the problematic aspects of what we faced in Arkansas.
Joel Tirado 04:44
Interesting. And so your organization, then, is it engaged in a bit of information sharing about these types of experiences?
Kevin De Liban 04:55
Yeah, we exist to fight the ground-level harms that AI is causing low-income communities around these issues. So we mix legal advocacy with community organizing, public education, and narrative advocacy to try to push back in multiple ways on the various harms that these systems cause low-income people when they make decisions about, you know, where we live, where we work, where we learn and survive, and all of those core aspects of our lives.
Joel Tirado 05:27
And so Mila, where does your research overlap with the work that Kevin and his organization are doing?
Mila Gascó-Hernandez 05:33
Yeah. So, as I was sharing before, at the Center for Technology in Government we have been doing research and helping governments. We do both academic and what we call applied research, helping governments in their digital transformation processes, and for many of our current clients, that means AI: helping them understand and assess to what extent they can adopt, implement, and use AI successfully. Our research works on the adoption, implementation, and use of AI, as well as the impact that AI has on the organization in terms of public values, in terms of processes, in terms of new roles, new tasks, new skills, and also in terms of the relationship between the government and the citizens. One part of our research relates to public values. What’s the impact of AI on public values? And for us, public values are anything and everything from transparency, accountability, participation, equity, fairness. So those are kind of the values of public service.
Joel Tirado 06:45
You spin those off pretty easily. I feel like you’ve said those values in sequence before.
Mila Gascó-Hernandez 06:51
Those values are the values that public servants guarantee when they interact with their citizens. And clearly, these values change because of AI, and we have realized that, in particular, when public organizations serve marginalized communities, these public values change dramatically when AI is involved. So I think that’s where our interests, our research and practice interests, overlap a little bit. We are currently doing a project, a three-and-a-half-year project funded by the Institute of Museum and Library Services, where we are looking at how public libraries can play a role in promoting inclusive civic engagement around AI. We argue, and this was a very interesting discussion that we had this morning, that community engagement around AI and co-creation of AI initiatives might be, you know, a way to make sure that AI is used in a successful way. But of course, this means that the community needs to engage around AI, and many people do not know what AI is about. They do not know that governments are using AI. So we argue that public libraries can be the safe and trusted institutions that can inform about what AI is, build capacities around AI, and also provide these spaces for inclusive civic engagement to take place. And public libraries, on top of everything, work particularly with marginalized communities. So this is an example of a project that, in a very different way, would also have the same goals as what Kevin is doing.
Joel Tirado 08:50
My first comment is just that I love libraries, so I’m glad we got to start there. And then I’m curious about what that looks like in practice. It does feel like the library as an institution in society has been playing a real role, and there’s opportunity for it to play an even larger role in its contributions to civic life. But what does that look like in practice? You know, if you’re a library director. I don’t know if you get down to that level.
Mila Gascó-Hernandez 09:28
We do. So in practice, this has developed in three different ways. One of them has been to offer programs and services that are about informing the public about AI. These are, for example, one-hour webinars or one-hour seminars where someone talks about what AI is and the benefits and the challenges and the risks. Then there is a second set of programs and services that are mainly about building capacities. These are, for example, multi-week workshops where people engage in using ChatGPT for resume building, for example, or where children and youth engage in coding and learn how to build a robot or how to develop coding skills. And then there is a third set of activities where the libraries offer this space that I was referring to, that brings the residents, the community, together with the government officials, with the public employees, to make decisions, in this case about AI. I have to say that this type of program and service we haven’t really seen in our study, but there are a few cases out there. Very recently, the public library in Queens, for example, had a town hall precisely to discuss AI policies for Queens, and so they brought together all these people from the community. They interacted with the government, and then they decided together a few guidelines for implementing AI and for the government to use AI. For us, this third kind of program and service is what really constitutes inclusive civic engagement. And so we say informing about AI and building capacity on AI are important; they are sine qua non conditions, if you will. But once people know what AI is and how to use it, we need to advance to that co-creation with citizens of policies, of guidelines, of strategies, and of technical initiatives themselves as well.
Joel Tirado 11:57
So Kevin, do you see libraries, in the work that you’re doing, as a potential partner? You mentioned that the communities that you work with are largely low-income or marginalized communities, and oftentimes there’s an overlap between library programs and library goals and those same communities. So do you see them as potential partners here? Or how does that work?
Kevin De Liban 12:21
Yeah, I mean, when it comes to libraries generally, they’re a main community hub for many low-income people. It’s the way many folks access internet service. It’s the way many folks access a printer, certainly books, movies, everything else. So I think the value of them as sort of a hub of information sharing and public education certainly is relevant. I think the information that we would share at the libraries would be much more focused on the ways that powerful people, whether it’s government actors, landlords, employers, school districts, whoever it might be, are using AI to make decisions about all of our lives, with really risky and harmful consequences, in particular for low-income folks. So I think our content at the library would be like: Hey, look at the ways AI is showing up in your life. Watch out for it. Have any of these things happened to you? And if so, here’s how we can fight back, both individually and then collectively, through various kinds of movement-based efforts and legal strategies and organizing strategies and so forth.
Joel Tirado 13:26
So, you know, what I’ve heard you describe so far is a lot on the receiving end of policy, if that’s a way you could describe it, right? So it’s folks who are oftentimes suffering the negative consequences of policies that already exist, right?
Kevin De Liban 13:45
Exactly, or policies that these powerful people, government actors or private actors, are considering and figuring out and implementing, oftentimes experimenting on low-income communities first.
Joel Tirado 13:56
And so do you see a role within your organization for the other end: the policy creation, policy building, policy analysis?
Kevin De Liban 14:05
Yeah, very much so. I mean, just to give all the listeners a sense of things: there’s my brief story about how AI is used to cut Medicaid home care benefits. It’s being used to decide eligibility for all kinds of public benefits: SNAP, which is food assistance; Medicaid; even Medicare programs. It’s in Social Security programs. It’s being used to decide whether or not you can rent an apartment and how much rent you’ll pay. It’s being used to determine whether or not you get a job throughout the hiring process, and then to manage you while you’re working and to gauge your productivity. It’s being used in schools to predict the likelihood that kids in the future might drop out or engage in criminal activity, with really punitive consequences around a future prediction. So it’s being used in all these really fundamental ways, and that’s allowed to happen because there’s a lack of regulation and of any sort of meaningful accountability mechanisms around most of this stuff, and it’s powerful people deciding what they’re going to do. So our whole reason for being is, one, to help people survive the immediate harms and just endure and make it through, but more broadly, to help focus these harms into a more collective, organized resistance so that we can push for meaningful and actually effective policies that will protect people and keep people from suffering in the first place.
Joel Tirado 15:30
I’d like to hear from both of you on this. What is that sweet spot, right? What is that line that we walk? Okay, there’s a new tool in the toolbox here, undeniably a new tool. And boy, is it good at certain things, right? I’ve heard people call it, what, a force multiplier, right? More terms that I just get sick of. Sorry, I’m going to try not to get derailed. But what is this line? We have this new tool. We see the harms; you’ve experienced firsthand the harms that it can cause, and so have the people that you’ve worked with. But at the same time, wow, it’s effective at doing certain things. So where do we go? How do we walk that line? What should we be keeping in mind as we try to pull these policies together? And since, you know, I’m looking at you, Kevin, why don’t you start, and then I really want to hear your perspective.
Kevin De Liban 16:32
My perspective? I mean, I actually think they need to be treated as incredibly dangerous for the immediate focus. My concern is not future robot overlords, Terminator kind of stuff, right? We are focused on the here and now. This technology is being actively used to restrict people’s opportunities in all kinds of these core life areas, and it’s being allowed to happen without any sort of regulation. So I think there needs to be bans on uses that make key decisions about people’s lives. Speaking in public values terms, I just don’t think it’s a legitimate function for AI to decide whether or not you get health care, to deny you health care in particular.
Joel Tirado 17:15
Can I jump in? Can it assist me in making those decisions?
Kevin De Liban 17:20
If the ultimate result is an approval, then I think it’s more defensible than when it’s being used to deny services, and this is because of the unequal burdens of what it means to have to fight a decision that’s informed by AI or made by AI.
Joel Tirado 17:34
What does that last piece mean?
Kevin De Liban 17:36
So if you have a decision made about you by AI: first, you probably don’t know that AI of any sort was being used. You don’t know how it works. And even if you figure out how it works, you don’t know why it works that way. And so you’re in no position, either individually or collectively, to fight back against a decision that’s made about you in these circumstances. That’s an incredible burden. And right now, instead of the burden being placed on the people using these technologies to justify it extensively upfront, to show that this doesn’t lead to these destructive harms, that this is some sort of net benefit, that we’re going to put everything out there that you need to know so that we could correct something if we make an incorrect decision, the burden is totally on the people who are subject to these decisions to fight back against them. And along with that, a lot of times the AI is being used to achieve an end that wouldn’t be possible without the AI. So it’s a surreptitious way to cut benefits, right, or to depress wages, to make workers less stable, to, you know, make renters have less market power. And it’s being used for these fundamentally, I think, unjust ends that are entrenching existing power dynamics. So in most uses it’s like, look, ban it. Where it’s permitted, it has to be extensively vetted, with all sorts of (I can talk more about what vetting means) super intensive disclosure, understandings, accountability lines. You have to have ongoing oversight, easy off switches, and there have to be real consequences when it harms people, right? And what do real consequences mean? Significant financial penalties that are enough to dissuade somebody in the future from even taking the risk that this would happen.
Joel Tirado 19:16
A lot of what you’re describing there seems to be about private actors, private companies, although certainly the government can use, could implement, AI systems, right? Sorry, I see you’ve got the...
Kevin De Liban 19:32
The government is actively doing it, especially in public benefit programs. So at the state level, Medicaid and SNAP and unemployment insurance: in most states, at least one of those programs has some aspect of it that would qualify as AI. At the federal level, Social Security benefits are now being determined in some ways, and...
Joel Tirado 19:53
So there’s, like, a budgetary incentive to keep the cost down?
Kevin De Liban 19:56
Yeah, that’s what they’re doing. They’re achieving policy ends. And a lot of times those policy ends are, you know, austerity driven: let’s cut the social safety net right at the time when people need it a lot. And we’re at a moment of, you know, rising inequality and all these other things that degrade both people’s individual existences and our collective existence as a civic body, and, you know, as a democracy and as a collective that cares about the public good. This hurts folks on all levels of existence.
Mila Gascó-Hernandez 20:27
May I say something?
Joel Tirado 20:28
Of course.
Mila Gascó-Hernandez 20:28
I think I am a little more of an optimist than Kevin, and I would like to highlight a couple of things. First of all, this is not new. This has happened with technology before. For some reasons, and we can discuss the reasons, it is becoming more visible in the age of AI, but all these types of behaviors we’ve already seen with other technologies. We had this morning in the round table Professor Virginia Eubanks, who wrote this beautiful book, Automating Inequality. She doesn’t talk about AI. She talks about information systems and algorithms, not even AI-based algorithms.
Joel Tirado 21:10
It came out before the ChatGPT “revolution,” I’m doing air quotes, in what, 2022, right?
Mila Gascó-Hernandez 21:17
Her book, right. And the problem with companies and IT vendors we’ve seen for many years, because governments have been outsourcing many things, not only technology: accounting, HR functions, and clearly technology as well, for many years now. So the problem is not new, but AI, I think, is making it more addressable and probably expanding, I don’t know if that’s the word, expanding the harm and expanding the impact. So that’s one thing I would like to say. The other thing that I believe we sometimes forget is the institutional context. In the end, maybe in the US, no, not maybe, it is my belief that in the US the institutional context results in a specific use of AI in government. In other countries, different institutional contexts result in different uses of AI. So clearly, this is a country that historically, how can I frame this so it doesn’t sound bad, has been fearful of the role of government in society, and so the government has a very small role. It doesn’t have to intervene in the economy, it doesn’t have to regulate, it doesn’t have to, you name it. In European societies it is the contrary: the government has a much more important role, and it is a role that is accepted, and therefore they have lots of regulations for everything and a lot of legislation for everything, which sometimes can even seem too much, right? But that institutional context is shaping the use of AI. It shaped the use of technology in the past, and currently it is shaping the use of AI. And I think that because of this, we cannot generalize that the use of AI, or the use of technology for that matter, is always bad. We need to take context into account. Going back to that book that we were talking about a couple of minutes ago: when I started to read it, and maybe you remember as well, the very first chapter was about the creation of the welfare state, if we can call what the US has a welfare state, right?
And I was like, oh, I thought this was a book about technology, and now I am, you know, reading about what it means to be a person eligible for welfare benefits, and the book goes back to the 1900s. It was really interesting; as I read through that chapter and then the rest of the book, I understood the message. I understood that what we have today, or what Virginia was describing in 2022, has its roots in that institutional context, in what was created back then in the 1900s. So what we are witnessing today is just the evolution of that. We have certain types of data, we have biased sources of data, if you will, to give an example, because that’s how that data was created in that moment, and we have inherited that way of doing things in the present moment.
Kevin De Liban 24:56
And if I may say, I think the historical forces that are shaping the institutional contexts that Mila is describing are precisely why this is so dangerous, right? If we’re talking about a history of racism, white supremacy, if we’re talking about antipathy toward the poor, if we’re talking about various kinds of social forces that have existed and shape existing institutional contexts, then this is precisely why AI should not be allowed, at least in the decision-making context, to further extensions of those kinds of forces: austerity, the shrinking of government and government’s ability to serve a collectively good purpose. You’re talking about privatization, which is an aspect of austerity, or outsourcing. You’re talking about AI allowing additional corporate concentrations of power, which is what we’re seeing as the big ten companies consolidate control over it, which has seriously deleterious effects on democracy and on market functioning and everything else. So, I mean, you can’t sit here and say that whatever theoretical potential AI has for good is actually going to be deployed and realized in an institutional context where we have all of these things going on. If anything, the more likely outcome is that it’s going to be used to, you know, control workers, to attack poor people, to experiment on Black and brown communities, et cetera, et cetera. And so I think that’s where it differs. Even if you have a theoretical use of AI for good, like the medical examples people sometimes offer: first of all, it’s not clear empirically that that is true, that any sort of medical advance is going to come. But even if it does, is anybody going to be able to access the better health treatment that comes out of it? No, because AI will have denied their health insurance or their claim to get that particularly advanced treatment.
So that’s the way I’m thinking about this: no matter what theoretical good use you have, it’s existing in this context where all the incentives, accountability structures, and historical forces are leaning it in a direction that means bad news for low-income folks.
Joel Tirado 27:08
So, at the risk of mischaracterizing you, your perspective is largely that, with AI in its use at these scales by government and by these large private organizations, there needs to be a movement to scale back that use and largely inhibit its deployment.
Kevin De Liban 27:35
Yeah, 100%. I don’t think...
Joel Tirado 27:37
Kevin De Liban 27:49
Yeah, almost. I mean, look, I’m also pragmatic, and I understand that, you know, we can have long-term goals and solutions and visions while recognizing that there are still a lot of short-term questions to answer. But I think yes, if you can slow AI adoption, if you can look to alternatives. You’ve got to remember, when we’re talking about AI, we’re talking about certainly the consequences to individuals, but also the consequences to society, right? The economic consequences of data centers, which we’re seeing everywhere. You’re talking about the corporate concentration. You’re talking about misinformation capabilities that erode the fundamental ability to share a reality, right? You’re talking about all these deleterious effects that are going on that you’ve got to slow down in whatever way possible, even if it’s not about that particular use case. And that particular use case might be compelling, but there are still all these other considerations to take into account, to try to work against.
Joel Tirado 28:48
It’s almost embarrassing to bring this up in the context of the work that you’ve done, Kevin, in Arkansas, right? My main interaction with AI systems is largely through, like, chat-based LLM stuff, and I’ll go in and say: here’s the suite of hardware and software that I’m using to work on this AV project; I’m having difficulty accomplishing X, Y, Z. Can you walk me through the steps? Help me troubleshoot, blah, blah, blah. And I’m getting results that are just, to me, astounding, as a person who spent many years learning new things by going through YouTube videos and reading through forums where people are just being pedantic nightmares to one another. So it’s just like, oh my goodness, a well-formatted, grammatically correct, kind checklist of steps that I can take to accomplish the thing that I want to accomplish. So that’s what I’m experiencing: wow, this is helpful. This is helping me do the thing that I want to do. So I guess, you know, how do we reconcile that with these broader, very real harms that you’re talking about?
Kevin De Liban 30:11
Like, we live in an unjust world and unjust circumstances, right? And it’s impossible to be a perfectly ethical individual person in any of these things, right? Like, I take ride shares that depress workers’ wages, because, look, they’re convenient, and sometimes where I’m at they’re quicker than a cab would be. So, I mean, there’s that issue. I think as individuals we can navigate it as best as we can, in alignment with values, but we’re going to be limited. I think the bigger focus is resisting the unjust uses as best as we can and working toward societal protections. In your use case, ChatGPT, like, the individual use case is harmless, right? If it’s wrong, what happens? You spend more time doing your job, or whatever it is, doing your task, but no significant, broader, immediate consequences to you. Now, you do have to consider that if a bunch of people are doing that, then we have all the data center and pollution and energy considerations and everything else that goes along with that. So that is an element. But when you’re using AI for something fundamental, like determining somebody’s health care, their benefits, their work, their housing, their education, then I think the individual component of it gets significantly more risky, and you don’t even have the compelling use case. And then, even in government uses, they talk about efficiency. One, a lot of the studies show that it isn’t necessarily more efficient. Two, you have the question of, even if it creates the efficiency, does the efficiency go back to the public in some way? Let’s say it takes 10 workers to do a certain amount of work. If you can now do that with eight workers, do the extra two workers get to spend their time and energy on serving the public? Or does that just justify cutting the staff from 10 to eight? And I think historically, what we see, given the US’s historical context, is that it’s the cut, right?
Mila Gascó-Hernandez 32:04
But aren’t you assuming, Kevin, that the only way to be correct is not to use AI? Then you’re assuming that humans are always correct, right, and that they make better decisions. They might be slower, they might not be efficient, but still, theirs is going to be the best decision when it comes to providing these benefits, this SNAP, this Medicaid that you’re talking about. And I think that’s where I disagree with you. I do not think that getting rid of AI is going to make that decision making better. I’m not talking about efficiency. I think that we are still going to see a lot of mistakes, and we are going to see a lot of people who are harmed by those mistakes, but in this case, it’s going to be a human making those mistakes.
Kevin De Liban 33:02
And that’s actually better. I mean, let’s be clear, without getting too much into policy wonk land, the history of anti-poverty policy, like in social welfare benefits, is filled with racism, both structural and individual. There were, and there are, racist, sexist, ableist, et cetera, caseworkers that still exist and that make decisions on the basis of those improper things. The thing is, they affect the one person, or the 20 people, or the 60 people whose cases they’re working on. They do not affect tens of thousands or millions of people at one time, as AI does. The other thing is, when they’re doing something improper, it’s not an error; they’re doing something improper, and it is much easier to prove that it is incorrect and false than it is with AI. And as somebody who’s had to prove it repeatedly, I can tell you, it is an immense challenge to try to undermine the legitimacy of an AI system versus a human.
Joel Tirado 34:00
Can we talk about that? What is it that makes it more complicated? Is it something technical? Is it the legal framework? What is it?
Kevin De Liban 34:10
So imagine, in my case, it was nurses who did the assessment and made the decision about how many hours of care to give somebody. Very reasonable thing: Well, wait, you came out and assessed this person last year, right? Yes. You asked them the same questions? Yes. You decided to give them X hours of care? Yes. Has their condition improved? No. Has their doctor said they’re able to care for themselves more independently than they used to? No. What justifies your decision for a cut? Right there, pretty compelling, right? There is no justification. To fight the AI, first you have to recognize that AI is being used. You have to get the code, if you can, and a lot of times there are serious fights around intellectual property, because companies try to keep it secret, and governments sometimes help them do that. You get the code; in our case it was 22 pages of single-spaced computer code. I have no background in computer science or statistics. We had to figure that out.
Joel Tirado 35:03
Use an AI to decode it.
Kevin De Liban 35:05
That might be it; that would be their use case. This lawsuit, now sponsored by insert big tech company here. And then you have to figure that out. You have to have a witness to tell the judge how the system works, a witness to tell the judge how it works as applied to your particular client, and then potentially a witness to say why that is irrational or unreasonable in light of what we know. It’s immensely difficult. And I can tell you, I lost so many cases where the judges in individual circumstances were like, Well, look, this is what the output of the AI was. I can’t disagree with that, right?
Joel Tirado 35:48
My pushback here is not at all on the types of things that you’re fighting for, for people who need someone in their corner. But just to ask: is it possible that the legal framework just needs to be developed? It hasn’t caught up yet, and this is the ugly part of the development of law. It’s always slow and terrible until it gets refined.
Kevin De Liban 36:20
To some extent, you’re right, but here’s the thing. The part you’re right about is that there is an absolute vacuum of accountability around this stuff. There’s no, or very limited, political accountability. Officials who oversee the deployment of these projects that hurt poor people are not going to get voted out for, you know, screwing with the lives of poor people, right? There’s limited market accountability, especially in the context of government AI, because there are relatively few vendors, who all offer bad products, and there’s vendor lock-in; there are all sorts of things with government that distort the market. Then you get into legal accountability mechanisms, and there’s very limited legal accountability for governments and for the vendors that are selling them these products. And so the thing I would disagree with in your take is the idea that law is just slow to develop. We have been aware of all these harms for a long time, and, in the ways that Mila points out, some of these dynamics are not new. It’s not that we don’t know what to do. It’s that big tech has money, and ever-increasing sources of money and influence, to make sure that the legislative process doesn’t happen in a way that is meaningful, or that meaningfully restricts what they’re allowed to do. And that’s where it’s not a question of innovation or time or anything else. It’s: y’all are comfortable enough to go make money off this, so why don’t you absorb the risk of what happens when your money-making venture goes wrong?
Mila Gascó-Hernandez 37:47
And I would go back to the institutional context, right? I teach a class where we talk about information policy, and therefore we talk about data privacy, data management, data governance, et cetera. We compare three systems, the US, China, and Europe, and how they understand data. For example, we say that in Europe, data is a right; in the US, data is an economic asset. That lack of federal legislation about data is the result of that institutional context, and that’s what allows these private companies to do what Kevin is describing. But again, I try to be optimistic, or more positive, because we have other examples in other places where this doesn’t happen, or doesn’t happen in such an extreme, impactful way, so to say. And also because I do think that this can be changed, and maybe it can start with small, incremental changes that eventually will result in a bigger change.
Joel Tirado 39:04
So how do we move forward? You work with governments, right? How do you advise them? People like Kevin and other folks are making them aware of the harms. How do you advise them about how to proceed in developing policies and programs that rely on AI systems that, let’s say, are less bad?
Mila Gascó-Hernandez 39:32
I want to say that this is my experience here in the US. I have extensively worked with governments in Europe and Latin America; my background is in Latin America, so I did a lot of work there. In the US, I have mainly worked with local governments and in New York State, so I cannot really talk about what’s happening in other states, beyond our context. But I want to say that, in the work I have done, many government directors, managers, and employees are very well aware of these risks, and they’re trying to do something about it. We have had conversations, for example, with ITS in New York State; they are talking about risk assessments. They are talking about piloting before deploying anything extensively, to make sure that they have those guardrails in place and that the impact is good. So what I’ve seen is that not all governments are bad, per se. There are local governments and agencies that are aware of the risks and are trying to do something about them. As I said, risk assessments are one of the tools they have; piloting before deploying anything is another important thing they are doing. And another important thing they’re doing is, more and more, trying to collaborate with other organizations. They come to UAlbany often to talk to us and get a different perspective on the things they are doing. So I think there is willingness to make this happen in a responsible and ethical way, despite all the difficulties and challenges.
Joel Tirado 41:46
Thanks again to Mila Gascó-Hernandez and Kevin De Liban for this timely discussion of the role of AI in government and whether and how we can preserve public values as AI is increasingly integrated into government systems. If you liked this episode, please rate, subscribe, and share. It will help others find the podcast and help us deliver the latest in public policy research. All of our episodes are available for free wherever you stream your podcasts and transcripts are available on our website. I’m Joel Tirado; until next time.
Joel Tirado 47:29
Policy Outsider is presented by the Rockefeller Institute of Government, the public policy research arm of the State University of New York. The Institute conducts cutting-edge nonpartisan public policy research and analysis to inform lasting solutions to the challenges facing New York state and the nation. Learn more at rockinst.org or by following RockefellerInst. That’s i n s t on social media. Have a question, comment, or idea? Email us at [email protected].
“Policy Outsider” from the Rockefeller Institute of Government takes you outside the halls of power to understand how decisions of law and policy shape our everyday lives.