Adam Ventura’s presentation on AI and ABA at ONBARR 2025 sounds futuristic, right? As someone who’s been on the front lines of DeFi and NFTs, I can tell you that hype often distracts us from reality, and that experience is exactly why I approach this presentation with hard-won skepticism. The question isn’t whether AI can be used in ABA, but whether it should be, and how cautiously.
AI in ABA: A Shiny Distraction?
We're told AI can personalize learning, predict behavioral patterns, and free up therapists' time. Sounds amazing, doesn't it? Let's be real: are we genuinely innovating, or just automating existing, potentially flawed systems? I see echoes of the early days of blockchain: everyone shouting about disruption without truly understanding the underlying mechanics or ethical implications. Are we sure we're not just slapping an “AI” label on things to make them seem more advanced, while overlooking the human element that's crucial in ABA?
Think about it: ABA is fundamentally about understanding and responding to individual needs. Can an algorithm really grasp the subtlety of how a child might feel? Can it respond to unexpected situations the way only a trained human therapist can, with empathy, creativity, and intuition? Or are we building a system that turns children into data points, ceding control of their interventions to cold, calculating algorithms?
Algorithmic Bias: A Real Danger?
As someone who works in the DeFi space, I have seen firsthand how algorithmic bias can reinforce and accelerate existing inequities. Even smart contracts written with the best intentions can accidentally codify discrimination or grant unfair advantages. Might the same be true for AI in ABA?
AI algorithms are trained on data. If that data reflects societal bias, underrepresenting some demographics or overrepresenting certain diagnostic categories, the model will inevitably learn those biases too. The result: misdiagnoses, inappropriate interventions, and, ultimately, the entrenchment of existing disparities in access to quality care.
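To make that concrete, here is a minimal, hypothetical sketch (my own illustration, not anything from the presentation): a single model is trained on data where one group vastly outnumbers another, and the learned decision boundary ends up serving the majority group at the minority group's expense. Every group name, threshold, and sample size below is invented purely for demonstration.

```python
# Hypothetical demo: how training-set imbalance alone skews a model.
# All groups, thresholds, and sample sizes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A is heavily overrepresented (900 samples vs. 100).
# Both groups follow the same rule relative to their own baseline,
# but group B's feature distribution is shifted.
n_a, n_b = 900, 100
x_a = rng.normal(0.0, 1.0, n_a)
x_b = rng.normal(2.0, 1.0, n_b)
y_a = (x_a > 0.0).astype(int)   # group A's true boundary is at 0
y_b = (x_b > 2.0).astype(int)   # group B's true boundary is at 2

X = np.concatenate([x_a, x_b]).reshape(-1, 1)
y = np.concatenate([y_a, y_b])

# One model, one learned threshold, fitted to the pooled data.
model = LogisticRegression().fit(X, y)

# The threshold lands near group A's boundary because A dominates the
# data, so group B members near their own boundary get misclassified.
for name, x, cutoff in [("A", x_a, 0.0), ("B", x_b, 2.0)]:
    preds = model.predict(x.reshape(-1, 1))
    truth = (x > cutoff).astype(int)
    print(f"group {name} error rate: {np.mean(preds != truth):.2f}")
```

Run it and group A's error rate stays tiny while group B's is dramatically higher. Nothing in the code is malicious; the skew comes entirely from who was, and wasn't, in the training data.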
Who gets to determine what’s “normal” or “desirable” behavior, anyway? If an AI is trained to favor conformity, are we not creating hierarchies that marginalize the creative, the quirky, the individualistic? Are we unintentionally engineering a generation of children to meet an algorithm’s standards, when what they need is every available opportunity to fulfill their own potential?
The irony is that ONBARR 2025’s themes touch on exactly this: trauma, identity, and innovation. How can a technology that is fundamentally pattern-matching begin to address the nuances of trauma and the complexities of identity? It feels like a jarring juxtaposition.
Human Connection: Easily Replaceable?
After all, one of AI’s biggest promises is efficiency: automate repetitive tasks, cut costs, scale up services. But in ABA, efficiency should not be the end goal. The therapeutic relationship, the human connection between therapist and child, is sacred.
Can AI truly replicate that connection? Can it offer the same depth of emotional support, encouragement, and genuine rapport? I doubt it. As a long-time teacher, I’m concerned that we might use artificial intelligence to strip the humanity from learning, turning education into a soulless, robotic endeavor.
Imagine this: an AI recommends an intervention that is technically appropriate but ignores the child’s emotional needs and cultural background. A human therapist would notice and adjust course. An AI, operating at full throttle but without judgment, might just barrel ahead, risking significant collateral damage.
- Pros of AI in ABA (Potentially):
  - Data Collection & Analysis
  - Personalized Learning Paths
  - Efficiency in Administrative Tasks
- Cons of AI in ABA (Real Concerns):
  - Algorithmic Bias
  - Dehumanization of Care
  - Data Privacy & Security Risks
The risk of data breaches and privacy violations is the third big concern. We’re talking about highly sensitive information about children: their behaviors, their progress, their developmental trajectories. Are we really ready to put this data in the hands of AI systems and risk it being hacked, misused, or worse?
My challenge to you, whether you’re a legislator, a stakeholder, or a supporter of ONBARR 2025, is to question your own perspective and paradigm. I ask you to take these words seriously and push back against the unquestioned acceptance of AI in ABA.
So, as you consider attending Adam Ventura's presentation, or perhaps even purchasing that $555 in-person ticket (don't forget the BENN20 discount!), ask yourself this crucial question: are we building a future where AI empowers learners with autism and other developmental needs, or one where it reinforces existing inequalities and stifles their unique potential? The answer, I believe, depends on our taking a cautiously optimistic, proactive, ethics-first, human-centered approach.