“Logic says, if everything is learned from sensory input, then I'm allowed to learn it differently from you, because we have different experiences. But that’s not true. Many of the most important things in life we're not allowed to learn differently—for example, that the circumference of a circle is 2πr. Things that we learn are accidental and immaterial. The stuff that matters, the stuff that makes the universe the way it is, are not learned. They are.” ... the top-down approach emblematic of symbolic AI is flawed because there is no foundational axiom (or agreed upon general principles) to work from—at least not when it comes to language and how our minds externalize thoughts as language. “When it comes to language and the mind, we have nothing,” Saba says. “Nobody knows anything about the mind and how the language of thought works. So anybody doing top-down is playing God.” https://lnkd.in/eyPR9p6g
Walid Saba’s Post
More Relevant Posts
-
In contrast, because of underspecification, two different DL models trained on the same training data may learn two completely different functions, neither of which may correspond to the underlying physical data-generating process. With higher model complexity, divergence in the models' learning paths grows, since there are more potential solutions and more underspecification. With humans, because the brain bootstraps from an innate neural structure at birth, there is more common neural ground as we go through lifelong learning experiences. The rules for that innate neural structure are encoded in the genetic code, to which both parents contribute; consequently there is commonality in what we learn going through life. A lot of human learning is also symbolic and instructional, based on agreed-upon knowledge about the world, which adds to our common shared knowledge. With transfer learning in DL, models fine-tuned from the same pre-trained model have more in common, but we may be off to a wrong start, because the base pre-trained model may bear very little relation to the physical data-generating process. #ai #deeplearning #brain #innate #underspecification #language #symbolic #subsymbolic #knowledge #learning
EAI Researcher: Are Language Models Missing Something? - Institute for Experiential AI
https://ai.northeastern.edu
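To make the underspecification point above concrete, here is a minimal, hedged sketch: two networks with identical architecture, trained on the same toy data and differing only in random seed, agree closely where they saw data but can disagree badly outside that range. The sine target, scikit-learn models, and hyperparameters are illustrative assumptions, not anything from the linked article.

```python
# Underspecification in miniature: same data, same architecture, different seed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 1))    # narrow training range
y_train = np.sin(3 * X_train).ravel()               # stand-in "data-generating process"

model_a = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=1).fit(X_train, y_train)
model_b = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=2).fit(X_train, y_train)

X_in = rng.uniform(-1.0, 1.0, size=(200, 1))         # in-distribution inputs
X_out = rng.uniform(2.0, 4.0, size=(200, 1))         # off-distribution inputs

print("in-distribution disagreement :", np.mean(np.abs(model_a.predict(X_in) - model_b.predict(X_in))))
print("off-distribution disagreement:", np.mean(np.abs(model_a.predict(X_out) - model_b.predict(X_out))))
```

Typically the two models agree closely on the training range and diverge sharply outside it, which is the sense in which the data alone underdetermines the learned function.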
-
“The underlying problem isn’t the AI. The problem is the limited nature of language. Once we abandon old assumptions about the connection between thought and language, it is clear that these systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans. In short, despite being among the most impressive AI systems on the planet, these AI systems will never be much like us.”
AI And The Limits Of Language
https://www.noemamag.com
-
https://lnkd.in/gU6FJ443 There are lots of examples in science and technology where the empirical results don't quite line up with theory, and so we extend and correct our theory. There are also lots of great examples of amazing improvements to our lives that came from an accidental discovery (penicillin is a classic example). This article does a great job of reminding us that the behavior of Large Language Models (and other large models) is perhaps as much discovered as engineered, and that how they work is not fully understood, even more so than the typical black-box nature of Deep Learning-based AI systems. These are systems complex enough that emergent behavior is expected, but their discovered nature becomes really important as we think about the explainability, reproducibility, and safety implications of systems built on these technologies. To be clear, I am very bullish on the application of AI. Powering vehicles with a flammable and combustible liquid sounds crazy, but we do it because the benefit is worth the risk.
Large language models can do jaw-dropping things. But nobody knows exactly why.
technologyreview.com
-
👁️🗨️ "Qualia Control" in Large Language Models Mind, machine, and the curious capabilities of artificial intelligence. 🔵 LLMs stir debate on AI's potential for qualia, the subjective aspect of experiences. 🔵 Philosophical challenges in linking qualia with brain functions are highlighted through thought experiments. 🔵 LLMs' advanced architecture suggests a possibility for consciousness, inviting speculation on their capacity. 🔵 AI qualia raises ethical considerations and the need for more research on AI and consciousness. #AI #mind #AGI #LLMs #GenAI #consciousness
The World's Leading Innovation Theorist in Technology, Artificial Intelligence and Medicine.
👁️🗨️ "Qualia Control" in Large Language Models Mind, machine, and the curious capabilities of artificial intelligence. 🔵 LLMs stir debate on AI's potential for qualia, the subjective aspect of experiences. 🔵 Philosophical challenges in linking qualia with brain functions are highlighted through thought experiments. 🔵 LLMs' advanced architecture suggests a possibility for consciousness, inviting speculation on their capacity. 🔵 AI qualia raises ethical considerations and the need for more research on AI and consciousness. #AI #mind #AGI #LLMs #GenAI #consciousness
"Qualia Control" in Large Language Models
psychologytoday.com
-
👉 Anthropic has succeeded in interpreting the internal workings of an AI model (an LLM)! Anthropic has interpreted the internal workings of the Claude 3 Sonnet model, addressing concerns about AI being a black box. This breakthrough marks the first concrete interpretation of a commercial large-scale model. By mapping neuron activation patterns into millions of human-interpretable concepts, they created a visual conceptual map. This allows for a better understanding of the model's predictions and behaviors, enhancing safety by identifying potentially harmful actions in advance.
Mapping the Mind of a Large Language Model
anthropic.com
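As a rough, non-authoritative illustration of the dictionary-learning idea behind this work, here is a toy sparse autoencoder that decomposes dense activation vectors into a larger set of sparse features. The dimensions, random stand-in activations, and hyperparameters are placeholder assumptions, not Anthropic's actual setup.

```python
# Toy sparse autoencoder ("dictionary learning") over activation vectors.
import torch
import torch.nn as nn

d_model, d_dict = 64, 512  # activation width and dictionary size (toy numbers)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))  # sparse, non-negative feature activations
        recon = self.decoder(features)             # reconstruction of the original activations
        return recon, features

sae = SparseAutoencoder(d_model, d_dict)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(10_000, d_model)                # stand-in for real model activations

for _ in range(200):
    recon, features = sae(acts)
    # reconstruction loss plus an L1 penalty that pushes the feature code to be sparse
    loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Each decoder column then acts as a candidate "concept direction" whose activations can be inspected for interpretability.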
-
A groundbreaking discovery in AI interpretability! Anthropic has just revealed a detailed map of millions of concepts within their large language model, Claude Sonnet. This is a leap in understanding how LLMs process and represent information.
Why does this matter?
➡️ Unveiling the Black Box: uncover the magic inside the model's "black box."
➡️ Safety First: mitigate biases, ensure honesty, and prevent misuse.
How did they do it?
➡️ Dictionary Learning: Anthropic employed a technique called "dictionary learning" to identify recurring patterns of neuron activations, representing concepts within the model.
➡️ Scaling Up: They overcame engineering challenges and scientific risks to apply this technique to a massive language model like Claude Sonnet.
➡️ Manipulating Features: By amplifying or suppressing specific features, researchers observed changes in Claude's behavior, confirming the causal relationship between these features and the model's understanding of the world.
This breakthrough by Anthropic is a significant step towards building more transparent, trustworthy, and safer AI systems. So, what are your thoughts on this development in AI? Engage below, share your insights, or like if you share our excitement for the future of AI! Let's keep the conversation flowing! Read the full paper here: https://lnkd.in/ejgZZV7A #ai #llm #anthropic #AiExponent
Mapping the Mind of a Large Language Model
anthropic.com
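The "manipulating features" step above can be sketched as adding or subtracting a feature's direction in activation space. This is only a schematic under assumed names (`feature_direction`, the chosen layer, and the scale are all hypothetical), not Anthropic's implementation.

```python
# Schematic feature steering: nudge activations along a learned feature direction.
import torch

def steer(activations: torch.Tensor, feature_direction: torch.Tensor, scale: float) -> torch.Tensor:
    """Add `scale` units of a unit-norm feature direction to each activation vector.
    scale > 0 amplifies the concept; scale < 0 suppresses it."""
    direction = feature_direction / feature_direction.norm()
    return activations + scale * direction

# `acts` would come from a forward hook on a chosen layer; `feature_direction`
# would come from the trained dictionary. Both are random placeholders here.
acts = torch.randn(8, 64)
feature_direction = torch.randn(64)
steered = steer(acts, feature_direction, scale=5.0)
```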
-
Technological Strategy Advisor; Google Developer Expert for ML - Generative AI; Google Accelerator Mentor & Sprints Master; Startup Founder
Excited to share my latest video, "Understanding RAG: Retrieval Augmented Generation in AI". In this video, I delve into the definition and mechanics of RAG, exploring how it enhances the responses of Large Language Models (LLMs) through real-time information retrieval. I also discuss the concepts of embeddings and vector databases. This video is perfect for AI enthusiasts and curious learners alike. Watch it now to gain deeper insights into this fascinating AI advancement. https://lnkd.in/ed79Nbam #AI #RAG #GDE #gemini
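For readers who want the mechanics in code form, here is a minimal sketch of the RAG loop the video describes: embed documents, retrieve the most similar ones for a query, and assemble them into the LLM prompt. The `embed` function below is a random placeholder standing in for a real embedding model, and the final prompt would normally be sent to an LLM rather than printed.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = [
    "Policy doc: refunds are issued within 30 days.",
    "FAQ: standard shipping takes 5 business days.",
    "Changelog: version 2 adds dark mode.",
]
doc_vectors = np.stack([embed(d) for d in documents])      # the "vector database"

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)                    # cosine similarity (unit-norm vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```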
-
Anthropic has achieved a breakthrough in understanding large language models like Claude 3 Sonnet. They identified how millions of concepts are represented within the model, using dictionary learning to uncover patterns in neuron activation. These features represent a wide array of entities such as cities, people, and scientific fields, and can respond to both text and images in multiple languages. Notably, they also found abstract features related to concepts like inner conflict and logical inconsistencies. By manipulating these features, they could change Claude's behavior, such as making it repeatedly mention the Golden Gate Bridge. This discovery enhances our understanding of AI models and could improve their safety in the future. For more details, visit the articles on Anthropic's website and Transformer Circuits. https://lnkd.in/eAp4TybE #AI #Claude #Anthropic #blackbox
Mapping the Mind of a Large Language Model
anthropic.com
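One common way to interpret a learned feature like the ones described above is to look at which inputs activate it most strongly. The sketch below assumes you already have per-example activations for a single feature; the texts and numbers are invented for illustration.

```python
# Interpret a feature by listing its top-activating examples.
import torch

def top_activating_examples(feature_acts: torch.Tensor, texts: list[str], k: int = 2):
    """feature_acts holds one feature's activation on each text; return the top-k texts."""
    values, indices = feature_acts.topk(k)
    return [(texts[i], float(v)) for i, v in zip(indices.tolist(), values)]

texts = [
    "the Golden Gate Bridge at sunset",
    "a recipe for sourdough bread",
    "fog rolling in over the bay",
    "tax filing deadlines for 2024",
]
feature_acts = torch.tensor([4.2, 0.1, 2.7, 0.0])   # made-up activations of one feature
print(top_activating_examples(feature_acts, texts))
```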
-
Delve into the fascinating world of generative AI with this insightful Forbes article, "A Whale of a Tale: The Size-Matters Misconception For Generative AI", by Jason Mars, a professor of computer science at the University of Michigan and President of Jaseci Labs. Learn why size isn't everything when it comes to unlocking the power of AI creativity, and discover the truth behind the 'size matters' misconception in AI development. Read the full article here: https://lnkd.in/g33f73ba #AI #GenerativeAI #TechInsights #AIinnovation #TechTrends #ArtificialIntelligence #TechDebates #SLaM #JaseciLabs
Author Post: A Whale of a Tale: The Size-Matters Misconception For Generative AI
forbes.com