I just finished reading Emily M. Bender and Alex Hanna's book The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, and I came away with mixed feelings. Mixed because the authors provide a comprehensive takedown of technology that I use (and am asked to use) on a daily basis, yet I'm not sure how much of their argument I buy at the moment.
What if LLMs Learn Relations Like Humans Do?

I've been thinking about how behavioral psychology might explain AI capabilities. Here's my working hypothesis:
I think emergence in LLMs comes from relational diversity. By a relation, I mean the act of verbally connecting things (stimuli, events, concepts, etc.) in some way; this typically takes the form of comparison, hierarchy, or one of many other relation types.
Effectively, we can think of relations as a kind of graph: concepts are the nodes, and relations are the edges connecting them.
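To make that graph view concrete, here's a minimal sketch in Python. Everything in it (the RelationGraph class, the relation labels) is illustrative naming of my own, not an established formalism: concepts are nodes, and each edge carries a relation type.

```python
from collections import defaultdict

class RelationGraph:
    """A toy typed graph: concepts are nodes, relations are labeled edges."""

    def __init__(self):
        # Maps a concept to a list of (relation_type, other_concept) pairs.
        self.edges = defaultdict(list)

    def add_relation(self, a, relation, b):
        """Record that concept `a` relates to concept `b` via `relation`."""
        self.edges[a].append((relation, b))

    def relations_of(self, concept):
        """Return all outgoing relations for a concept."""
        return self.edges[concept]

g = RelationGraph()
g.add_relation("dog", "hierarchy:is_a", "mammal")         # hierarchical relation
g.add_relation("dog", "comparison:bigger_than", "mouse")  # comparative relation

print(g.relations_of("dog"))
# [('hierarchy:is_a', 'mammal'), ('comparison:bigger_than', 'mouse')]
```

Under this view, relational diversity would correspond to the variety of edge types a model has been exposed to, not just the number of concepts it has seen.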
This document lives in several places for accessibility:

- GitHub
- Google Docs (for comments)
- My blog

Introduction

The rapid integration of advanced AI capabilities into everyday applications has brought significant improvements in efficiency and user experience. However, it has also introduced new security challenges that demand our attention. In this study, we examine the potential vulnerabilities in AI systems that combine language models with external tools, focusing on Retrieval-Augmented Generation (RAG) in customer support scenarios.
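As a rough illustration of the setting, here is a minimal, self-contained sketch of the RAG pattern, using a toy keyword retriever in place of a real vector store and model (TOY_CORPUS, retrieve_docs, and build_prompt are hypothetical names, not from the study). It highlights where retrieved, potentially attacker-controlled text enters the model's prompt.

```python
# Toy stand-in for a support-article knowledge base.
TOY_CORPUS = [
    "To reset your password, visit the account settings page.",
    "Refunds are processed within 5 business days.",
    "Our support hours are 9am-5pm, Monday through Friday.",
]

def retrieve_docs(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(TOY_CORPUS,
                    key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt; retrieved text is untrusted input here."""
    context = "\n".join(retrieve_docs(question))
    return (
        "Answer the customer's question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("How do I reset my password?"))
```

The security-relevant step is build_prompt: anyone who can inject text into the retrieved corpus can thereby inject instructions into the prompt the model sees, which is the class of vulnerability this study examines.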