Imagine this:
"An AI-driven child welfare system, built on the logic of “social investment,” predicts which children will become future burdens on the state—before they are even born. Trained on decades of biased data, the algorithm scans poverty markers, housing instability, and family histories, generating lifetime cost projections. A newborn’s fate is determined in seconds: Will they remain with their birth mother, or will they be removed to “maximize their life chances”?
Hospitals are the first line of intervention. AI screening tools scan medical records, financial histories, and even social media posts. A missed prenatal appointment, an eviction notice, or a stressed-out text message is enough to trigger an alert. If the risk score is too high, the mother never holds her child. A state-employed drone arrives with an automated court order, and the newborn is transferred to an “optimal” foster home—one selected for economic stability, genetic predispositions, and predictive success outcomes.
For those allowed to keep their children, surveillance is the price of retention. Parents are fitted with AI-driven wearables that monitor stress levels, speech patterns, and emotional regulation. Marketed as “preventative support,” these devices continuously transmit data to child protection servers, flagging fluctuations that suggest potential neglect. A spike in cortisol, an argument overheard by a smart speaker, a father working late too many nights in a row—each triggers a risk assessment.
Social workers, now relegated to reviewing algorithmic dashboards, issue corrective interventions without ever stepping foot in a home. Families with repeat warnings are assigned robotic “parenting assistants” to observe interactions and track improvements. If the AI detects continued “risk,” the system escalates—not through human judgment, but through an automated court that prioritizes statistical probabilities over personal testimony.
When removal is deemed necessary, it happens instantly. Drones arrive before social workers do. Parents’ wearables vibrate with an alert: Final intervention in progress. Compliance required. There is no appeal. The AI has spoken."
The above scenario was written — with a few tweaks by me — by ChatGPT. I asked it to envision a dystopian application of artificial intelligence to child welfare, with special focus on how this system could be used to dismantle low-income families with hyper-efficiency. It did a wonderful (and perverse) job.
For now, this is science fiction. However, science fiction allows us to see what’s possible, and while the extreme future outlined above might never come to pass, we should still be wary of this technology as it becomes increasingly intertwined with systems designed to protect society’s most vulnerable members. The question is not whether AI will be used in these spaces (to an extent it has been used for some time), but how, and who gets to decide.
That’s what I’ll be covering in today’s newsletter. Given that folks in the know have increasingly been claiming that the arrival of Artificial General Intelligence (AGI) is imminent (AGI referring to “the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can”), it is imperative that all of us humans start talking about this as much as humanly possible.
Don’t get me wrong: child welfare professionals have been talking about this for years. Chapin Hall, UNICEF, Child Trends, and a slew of academics have all written thoughtfully on the subject. You should read their work (and support it too) if you want to learn more. I am just adding my own unique perspective, with the hope that folks who aren’t thinking about what’s on the horizon can start doing so now.
Plus, in the survey I released earlier this year, several of you requested I write about this topic, and so, that is precisely what I shall do. This newsletter is meant to serve as a basic overview of the topic; future newsletters will dive a bit deeper into specific concepts or practices.
Ok, let’s begin!

The Here and Now: How AI and Algorithms Are Used in Child Welfare:
Perhaps the best place to start is with how AI is already being used in child welfare. Given space constraints, I’ll focus on just one contentious application of this technology: predictive analytics.
First, for the uninitiated, predictive analytics is precisely what it sounds like: “the process of using data to forecast future outcomes.” This approach has been used in child welfare for decades, with some states and jurisdictions experimenting with predictive algorithms for decision-making as early as the 2000s. Ostensibly, this should be an excellent use of technology. Imagine being able to analyze information from various data sources (courts, law enforcement, hospitals, child protective services, and so on) to identify patterns and predict whether a child is at risk of abuse or neglect. If tech companies are using it to sell us stuff or to keep us doom-scrolling on social media, why not use the very same technology to protect children?
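To make the mechanics concrete, here is a deliberately tiny sketch of how a predictive risk score gets produced under the hood. To be clear, this is my own toy illustration, not any agency’s actual model; the “risk markers,” data, and threshold are invented purely to show the shape of the thing.

```python
# Toy sketch of predictive risk scoring -- NOT any agency's actual model.
# The features, data, and threshold are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: each row is a family, each column a "risk marker"
# (say, prior hotline calls, a housing-instability flag, receipt of public benefits).
# Notice how easily these double as markers of poverty rather than of maltreatment.
X_train = np.array([
    [0, 0, 1],
    [3, 1, 1],
    [1, 0, 0],
    [5, 1, 1],
    [0, 1, 0],
    [2, 1, 1],
])
# 1 = the family later had a substantiated maltreatment finding, 0 = it did not.
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# A new referral arrives; the model converts it into a score between 0 and 1.
new_family = np.array([[2, 1, 0]])
score = model.predict_proba(new_family)[0, 1]

# Agencies then bin scores or apply thresholds to triage referrals.
print(f"Predicted risk score: {score:.2f}")
print("Flag for screening" if score >= 0.5 else "Below screening threshold")
```

The point isn’t the math, which is decades-old statistics; it’s that whatever patterns live in the historical data, biased ones included, get baked directly into the score.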
But, alas, it isn’t so simple, for there have been several unfortunate instances where this technology, rather than protecting children, functioned as an Orwellian surveillance tool that subjected families to unnecessary scrutiny. California and Illinois, pioneers in the use of predictive analytics, ultimately shuttered their tools after being plagued by high rates of false positives that led to families being investigated for no good reason.
The most well known — or perhaps notorious — model is the Allegheny Family Screening Tool (AFST). It is a “predictive risk modeling tool that rapidly integrates and analyzes hundreds of data elements for each person involved in an allegation of child maltreatment.” From this data a score is generated that “predicts the long-term likelihood of future involvement in child welfare.” If you squint, this looks a little bit like the dystopian scenario I outlined in the introduction. Heck, you don’t need to squint that much, because the AFST has been subjected to a litany of critiques:
It has been investigated by the Department of Justice over concerns that it violated the civil rights of people with disabilities.
Virginia Eubanks — an investigative journalist and professor at the University at Albany, SUNY — dedicated a lengthy chapter to the tool in her book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. In that chapter she argues that much of what the tool identifies as problematic parenting could really just be parenting-while-poor.
Some researchers have criticized the AFST for misrepresenting risk by presenting a ‘risk score’ that does not indicate the absolute probability a child would be removed, but rather a relative comparison to past cases (I’ve included a brief illustration of the difference after this list). While this distinction might seem subtle, it can have an outsized impact on families who are subjected to this opaque model.
The ACLU argues that the AFST’s removal tool is partially predicated on the belief that families can never change, as a stay in the Allegheny County jail (at any time and for any reason) is factored into the model. As such, the AFST “effectively offers families no way to escape their pasts, compounding the impacts of systemic bias in the criminal legal system.”
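Here is that illustration: a quick, invented-numbers sketch of why a relative score can sound far more alarming than the absolute probability underneath it. To be clear, this is not the AFST’s actual data or methodology, just the general shape of the relative-versus-absolute problem.

```python
# Toy illustration of relative vs. absolute risk scores -- invented numbers,
# not the AFST's actual data or methodology.
import numpy as np

# Suppose a model produced these predicted probabilities for past referrals.
historical_probs = np.array([0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.12, 0.15, 0.20])

new_family_prob = 0.12  # absolute probability: a 12% chance of the predicted outcome

# A relative score ranks the new family against past cases, then bins the rank.
percentile = (historical_probs < new_family_prob).mean()  # share of past cases scoring lower
relative_score = max(1, int(np.ceil(percentile * 20)))    # squeeze into a 1-20 scale

print(f"Absolute probability: {new_family_prob:.0%}")  # 12%
print(f"Relative score (1-20): {relative_score}")       # 14 -- sounds far more dire
```

A caseworker who reads “14 out of 20” as something like a 70% chance is badly misled, and a family on the receiving end has no way to know the difference.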
There are more criticisms, but perhaps the most salient for me personally is how families themselves experience a system ‘supercharged’ by these technologies. The child protection system, I know from experience, is already complex and terrifying for low-income families. As a kid I came to believe that CPS was hiding behind every corner and stalking my family’s every move. But I engaged with CPS before these systems came online. I can only imagine how much further that paranoia gets ratcheted up now, when you have the feeling of being watched at all times by a form of ‘intelligence’ that is allegedly unbiased and seemingly ubiquitous.
The above, however, assumes that families are even aware these tools exist. In 2022, the Associated Press (in an article that I highly recommend you read yourself) reported that many families aren’t aware these tools are being used, and in some cases, families and attorneys cannot even find out whether the algorithm played a part in their cases because “they aren’t allowed to know the scores.” One attorney interviewed in the piece recalled that a “judge demanded to know a family’s score, but the county resisted, claiming it didn’t want to influence the legal proceeding with the numbers spat out by the algorithm.” Heck, the tool had a technical glitch for two years that gave social workers the wrong scores, either underestimating or overestimating a child’s risk.
Imagine you get a knock at your door from a caseworker who will be conducting an investigation into you because some computer said, essentially, that you were a bad parent. Imagine going through this lengthy investigation, and hopefully being cleared of any wrongdoing, only to read later that the computer had an issue and you were accidentally flagged. If you’ve ever been on the wrong end of a technical issue (I know I have with the VA), you know how frustrating it is to have your life held hostage by bits and bytes. Now imagine your kids could be taken away as a consequence of a glitch. There’s something profoundly undemocratic, profoundly terrifying about this process, to say the least.
This isn’t to say there is no room for tools of this nature. For example, some have argued that they can provide a ‘scientific check’ on caseworkers’ personal biases, though empirical studies on the subject are decidedly mixed. I am typically of the belief that policymakers should be allowed to experiment with evidence-based policies and, if necessary, fail, because that process is what is most conducive to innovation. But the stakes are high, and high stakes leave little margin for error. These tools must be absolutely unimpeachable (and radically transparent) if they are going to be used to assess families.
Other Applications of AI in Child Welfare
It seems these days that everybody has an AI pitch. Peruse LinkedIn for five minutes and you’ll find someone writing about “an AI-powered job searching tool” or “an AI-powered business-to-business sales platform.” If it exists, someone has likely claimed that it should be AI-powered. Child welfare is no exception. Below are just a few examples I found after 15-20 minutes of googling:
An “artificial intelligence-powered tool,” inspired by online dating, has been devised to help facilitate adoptions. The tool, called Family-Match, uses a proprietary algorithm that purports to predict which adoptive families will be likely to stay together. An AP investigation found that this tool left much to be desired.
Propel, a technology company, is developing AI-powered tools “aimed at improving navigation of the safety net, which includes government programs” such as SNAP (food stamps). This could be vitally important, as a key concern with the social safety net is take-up rates: how many eligible families are actually accessing the government programs they qualify for? SNAP, as I’ve written about before, has an outsized impact on keeping kids healthy and keeping families together, so I consider this a fantastic use of AI!
First Place for Youth, a non-profit, has developed a “proprietary recommendation engine” called the Youth Roadmap Tool (YRT). This tool uses “precision analytics to analyze program data and learn from outcome differences among our transition age (18-21) foster youth,” with the ultimate goal of assessing what blend of supports will help kids make the leap to independence.
This is just the start, folks. I imagine we’ll see exponential growth in AI being applied to the social service sector over the next few years. Now, with that out of the way, let me add my two cents to all this.
My Two Cents:
When I separated from the US Navy, my first job was with an organization called Singularity University, in Silicon Valley. I was an iLab Technology Guide, meaning I conducted product demonstrations of ‘exponential technologies’ (AI, augmented reality, virtual reality, 3D printing, autonomous vehicles, and more) for high-level business executives and government officials seeking to leverage this tech to solve humanity’s grand challenges. This was before the rise and rapid diffusion of chatbots such as ChatGPT and Claude.
I loved this job for many reasons, but one primary reason was that I was a true believer: I thought technology could indeed be used to solve the challenges plaguing mankind. My belief has flagged in recent years as I have watched this altruistic, triumphalist language (helping humanity, solving problems, and so on) be wielded to do the exact same thing that corporations have been doing since long before microchips and bytes entered humanity’s lexicon: seek profits above all, crush competition and corner markets, and entrench existing inequalities under the guise of innovation.
I’ve watched as technology—once heralded as a force for truth—became a megaphone for disinformation, as social platforms optimized for engagement fueled polarization, and as AI-powered scams preyed on the most vulnerable. Tools that could have democratized knowledge have instead been used to manipulate, surveil, and extract.
But despite all this, I am still optimistic, if only because when there are good people doing good work, there is always hope. What can we hope for? How can this tech be used to improve the lives of foster youth? Well, let me hearken back to my own time in the system, and openly ruminate about the ways that AI could’ve possibly helped me.
As I’ve written about before, several of my caregivers stole from me. That is, the state of California allocated funds to be spent on my clothing, and my foster parents pocketed that money. I didn’t report this as I feared retaliation: I’d much rather be out a few hundred bucks a month than have to switch schools and move to a different house. My foster parents would fork over receipts to my social workers, and my social workers would accept them, with no questions asked.
But what if AI had been there—not as a tool for surveillance, but as a safeguard? Imagine if an automated system cross-checked receipts and flagged inconsistencies, like a foster parent claiming purchases from a store that doesn’t sell clothes or submitting a receipt with items that were clearly never meant for a teenage boy—women’s handbags, scented candles, baby clothes. If you think such tech isn’t needed, I saw some of the receipts my foster parents handed over to my social worker, and guess what was on them: women’s handbags, scented candles, baby clothes.
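To be concrete about what I mean by a safeguard, here is a toy sketch of what an automated receipt cross-check might look like. Everything in it (the allowed store categories, the “suspicious items” list, the sample receipt) is invented for illustration; a real system would need to be far more careful, and its output should go to a human for review rather than trigger anything automatically.

```python
# Toy sketch of an automated receipt cross-check -- the categories, rules, and
# receipt below are invented for illustration, not any real CPS system.
ALLOWED_CATEGORIES = {"clothing", "shoes", "school supplies"}
SUSPICIOUS_ITEMS = {"handbag", "scented candle", "baby clothes"}

def flag_receipt(receipt: dict) -> list[str]:
    """Return human-readable flags for a caseworker to review (not an automatic judgment)."""
    flags = []
    if receipt["store_category"] not in ALLOWED_CATEGORIES:
        flags.append(f"Store type '{receipt['store_category']}' doesn't match a clothing allowance.")
    for item in receipt["items"]:
        if item in SUSPICIOUS_ITEMS:
            flags.append(f"Item '{item}' looks unrelated to the youth the funds were allocated for.")
    return flags

# Example: the kind of receipt my foster parents actually submitted.
receipt = {"store_category": "home goods", "items": ["scented candle", "handbag"]}
for flag in flag_receipt(receipt):
    print("FLAG:", flag)
```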
You might say that the above could be rectified the old-fashioned way: my social worker using their eyeballs to see when blatant fraud is being committed. But my social workers were often overwhelmed themselves, and they changed so frequently that there were times when I’d get different caseworkers in successive weeks. This occasionally bore out in the reports submitted to the court, where information would slip through that was hilariously off the mark; sometimes another child altogether would be referenced. This isn’t their fault, I should say, but a systemic issue: social worker shortages, caseworker turnover, and crushing caseloads have long been unfortunate features of the child welfare system.
What if we could leverage technology to alleviate the burden on these caseworkers? Rather than leaving social workers slammed with paperwork, AI could help automate the routine, administrative work that prevents them from doing what brought them to the field to begin with: working with families and helping kids. I am positive there are folks working on this technology now, and if so, I hope the work is being shaped by social workers and former foster youth themselves.
Or, heck, what about aging out of the system? I remember that time well: the uncertainty, the existential dread, the feeling that I wasn’t ready, that the streets were waiting for me. I felt, at the time, that I was about to be smacked on the behind and wished good luck as the system ushered me out the door. I imagine that, had a “one-stop-shop” type of technology been available to me then (an AI-powered assistant to help me navigate housing options, track down vital documents, check the right boxes for college applications, and connect me with mentors and support networks), I’d’ve felt a heckuva lot more confident about my options.
I could keep going, but you get the picture: artificial intelligence can indeed do a lot of good for folks, if properly applied. But, just to be clear, let me conclude this section on this note: the system will not be saved by technology.
While human failures are rife within foster care, the system itself is broken. It has been starved of resources yet treated as a ‘solution’ for all of society’s problems: poverty, drug epidemics, criminal justice, and so on. Artificial intelligence can certainly make a lot of things better if deployed correctly, but the system will only be saved when policymakers empower humans to do what they do best: protect, love, and care for children and families.
One Possible Future, If We Work For It:
I opened this newsletter with a dystopian vision of a child welfare system driven by artificial intelligence. Let me close with a utopian vision, also provided by ChatGPT, so I can leave you folks on a positive note. Imagine:
"An AI-driven child welfare system designed not to separate families, but to strengthen them. Instead of predicting which children will “fail” and removing them preemptively, the system identifies which families need support—before a crisis forces intervention.
Hospitals, schools, and social service agencies use AI-powered tools to connect families to resources, not surveillance. A single mother struggling with rent? The system flags her for early housing assistance, not child removal. A father juggling multiple jobs? AI automatically pre-fills applications for food assistance and childcare subsidies, ensuring no family is forced apart due to poverty.
Caseworkers—often overwhelmed with caseloads too large to manage—are no longer bogged down by endless paperwork. AI-assisted case management tools summarize reports, suggest best practices based on similar cases, and streamline communication across agencies. This means caseworkers spend less time on data entry and more time where they are needed most: in homes, with families. Instead of acting as simply cogs within a broken system, they become partners in keeping families together.
For children in care, AI personalizes reunification plans, ensuring that every parent receives the specific services they need—whether that’s addiction treatment, job training, or mental health support. The system continuously updates, adjusting recommendations as parents make progress, preventing unnecessary delays in reunification.
For those aging out of foster care, an AI-powered lifelong support network ensures they don’t face adulthood alone. It proactively connects former foster youth to scholarships, job training, and mental health services, reminding them of deadlines, matching them with mentors, and ensuring they never slip through the cracks. AI-driven chatbots provide 24/7 assistance, helping with everything from finding housing to navigating bureaucratic systems.
Most importantly, this system is built on a fundamental belief: poverty is not neglect. AI doesn’t police families—it empowers them. It doesn’t separate—it supports. And in doing so, it redefines child welfare not as a last resort, but as a force for keeping families whole."
There is much more I wanted to write here, as AI has a litany of implications for child welfare beyond what I’ve covered above. But I know I will be returning to this subject again and again, because AI isn’t going away anytime soon.
Thank you for reading, and we’ll see you in a few weeks!
Current Read(s):
For this week and the last several, I’ve been slowly working my way through A Theory of Justice by John Rawls, a dense (and at times meandering) work of political and moral philosophy. I’ve read about Rawls and his theories for quite some time, and one concept I’ve been particularly interested in is the “original position,” with its “veil of ignorance.” Put simply, Rawls argues that in order to objectively construct a just society, we must act as if we have no knowledge of what position we would end up occupying in that society. We must pretend we exist behind a “veil of ignorance,” meaning we won’t know our class, race, gender, ethnicity, sex, and so on.
So, having had my interest piqued, I thought I’d learn a bit more about Rawls and his philosophy by reading his seminal work. Wish me luck!
What’s Going On In the World of Child Welfare?:
Virginia’s Custody Crisis: Why Are Parents Giving Up Their Kids and Solutions to Help Families (WSLS 10) — A practice called ‘relief of custody’ has skyrocketed, with hundreds of cases of parents voluntarily surrendering their kids popping up in the past five years.
As Liability Insurance Runs Out, Crisis Looms for State’s Private Foster Care Agencies (Chicago Sun-Times) — An issue with liability insurance is threatening to upend the foster care system in Illinois.
Texas Senate Bill 620 Would Emphasize Family Preservation in Child Welfare Cases (TCU 360) — Texas is turning over a new leaf, at least if this bill gets enacted and proves effective at keeping kids safely with their families.
27 Colorado Kids Ran From Foster Care and Treatment Centers in One Year. Now Lawmakers are Talking About A Fence (Colorado Sun) — To prevent foster kids from running, Colorado lawmakers are trying to change regulations to allow fences around state-owned residential treatment centers.
New Bill Would Provide Extra Support for Utah Foster Children (Deseret) — New legislation in Utah will allow the state to apply, on behalf of children in care, for Medicaid and other federal benefits, while also providing youth with financial literacy training.
Support Don’t Report, Urge California Bills Focused on Struggling Families (The Imprint) — California is trying to forge alternative pathways for families in need, where they can be provided support when in need of help, rather than being reported to CPS.
Missouri Senate Passes Child Welfare Overhaul (Governing) — The Missouri senate passed legislation covering residential care centers and tax credits for youth programs.
Bill Would Give Teens Aging Out of Foster Care $2K a Month Despite Potential Convictions (KRQE) — Some folks are upset that a program providing foster youth aging out of the system with $2K a month includes former foster youth who have gotten into trouble with the law.
Teen in KY Foster Care Office Causes $27K in Damage. Auditor Says It’s A ‘Systemic Failure’ (Lexington Herald Leader) — Kids are temporarily being housed in social service offices in Kentucky, precisely where children should not be placed.
New Legislation in New York Supports Foster Children’s Right to ‘Dignified Transportation’ (Next City) — Great news! I’d love to see Congress take this up though, so states don’t have to do this one at a time. In an age where folks can’t agree on anything, I can imagine like 90% of America would say foster kids shouldn’t be forced to stuff their belongings in trash bags when they need to move homes.