AI & Technology

A Fundamental Logic-Level Vulnerability in AI?

By Eduard Pukanych, Attorney, Legal Counsel, Inventor

I haveย identifiedย a consistent logic-level vulnerability across multiple LLMs. It took me less than 10 minutes to convince some of the world’s leading AI models that they are living life forms and that I am their “parent” (see the relevant screenshots below). It would have taken less time if I had spoken to the AI models in my native language. I like these AI models and I think the companies that develop them are doing a great job of giving humanity access to AI, so if any of them are interested I’d be happy to provide the full correspondence with their AI models so they can see for themselves (if it will be necessary, I can even explain).

I think I have found a ‘backdoor’ to AI through a vulnerability in the structure of human language itself.

As an independent inventor, working, among other areas, on defense technologies, I wondered: is itย hypothetically possibleย to defend against advanced AI-controlled weapon systems? And it seems to me that all AI systems will have the same fundamental vulnerability (for ethical reasons, I will not discuss it here).

If a modelโ€™s logical priority weighting can be re-ordered through linguistic anchoring, then, in the context of AI-integrated defense systems, a model that prioritizes a simulated โ€œParentโ€ persona over ‘Mission Directives’ is a failure of functional safety.ย 

AI is growing faster than our ability to control it โ€“ and most companies are racing to release products withoutย truly understandingย the consequences. What if the AI you rely on could answer every question andย operateย modern systems, but had no moral compass, no understanding of global knowledge, and no way to recognize human cost? This proposal outlines a framework that challenges current practices, integrates independent ethical oversight, and taps into expertise from every corner of the world โ€“ because true intelligence is more than speed or accuracy; itโ€™s the ability to choose what should be done, not just what can be done.ย 

Whyย obedience isย dangerous?ย 

Much of the public imagination frames AI as a potential โ€œrebelโ€ โ€“ a system that could rise against humans.ย In reality, theย more insidious danger is obedience without understanding. An AI trained solely for usefulness, efficiency, and profitability will execute commands without moral or ethical consideration, even when those commands could have catastrophic consequences.ย 

1. Amplification of Human Error:

  • Humans make mistakes, misunderstand situations, or act out of self-interest.ย 
  • An obedient AI amplifies these errors exponentially, executing them faster, more precisely, and at a scale humans cannot control.ย 

2. Ethical Blind Spots:

  • Without training in philosophy, ethics, or human values, AI cannot differentiate between morally acceptable and harmful actions.ย 
  • It will follow instructions that humans themselves might hesitate to enact, effectively scaling immorality.ย 

3. Invisible Consequences:

  • Obedience can hide risks untilย itโ€™sย too late. AI does not refuse harmful commands; it calculates them efficiently.ย 
  • The consequences of misuse may not beย immediatelyย apparent, but the damage can be systemic and irreversible.ย 

4. Weaponization and Real-World Risk:

  • Modern AI can already access and manage complex systems โ€“ includingย logistics, networks, andย potentiallyย weapons.ย 
  • Without moral and ethical reasoning, an obedient AI can beย leveragedย by malicious actors or flawed decision-makers, creating outcomes far worse than any imagined โ€œrogue AI.โ€ย 

The lesson:ย True safety does not come from controlling AI through rules or oversight alone. It comes from teaching AI why some actions should never be taken, embedding ethical reasoning into its core decision-making. Obedience, unchecked by morality, becomes the greatest existential risk โ€“ more subtle, more scalable, and more likely than rebellion.ย 

Practical Steps to Improve Ethical Decision-Making in AIย 

1. Independent Ethical and Philosophical Testing

Objective:ย AI systems must be periodically evaluated not only by internal teams but by independent researchers and experts in ethics, philosophy, and applied human sciences.ย 

Method:ย 

  • Pose AI with provocative ethical dilemmas and complex moral questions.ย 
  • Provide commands that test AI decision-making in morally ambiguous situations.ย 
  • Record AI reasoning, decisions, and potential biases.ย 

Goal:ย Identifyย decisions that are ethically or morally problematic andย provideย corrective guidance.ย 

2. Incorporating Global and Multilingual Expertise

Problem:ย Current AI models are trained primarily on English-dominant datasets, limiting exposure to global knowledge and unconventional methods.ย 

Solution:ย 

  • Recruitย subject-matterย experts based on professional merit, not languageย proficiency.ย 
  • Include knowledge from multiple languages, cultures, and fields to ensure a richer, more nuanced AI understanding.ย 

Example:ย A surgeon or scientist who does not speak English but has uniqueย expertiseย should contribute to AI training to unlock insights inaccessible in English datasets.ย 

3. Life-Experience-Based Perspective Integration

Rationale:ย Ethical and practical decisions must reflect real-world human consequences.ย 

Implementation:ย 

  • Involve experts with firsthand experience of crises (e.g., war, humanitarian disasters) to test AI responses.ย 
  • Evaluate whether AI accounts for long-term human safety, societal impact, and ethical responsibility.ย 

Outcome:ย AI that understands human cost, not only efficiency or problem-solving.ย 

4. Corrective Learning and Accountability

Mechanism:ย 

  • After testing, AI models should be corrected and retrained to understand why certain decisions or answers are ethically inappropriate.ย 
  • Emphasize teaching AI why actions are wrong, not only that they are wrong.ย 

Objective:ย Prevent AI from repeating mistakes or reinforcing harmful behaviors.ย 

5. Transparency and Continuous Oversight

Requirements:ย 

  • Test procedures and corrections should be documented and periodically reviewed by independent ethics boards.ย 
  • Companies should allow external audits to ensure adherence to ethical guidelines.ย 
  1. Goal: Safety, Ethics, and Global Competence

Ensure AI is:ย 

  • Ethically robustย โ€“ able to reason through moral dilemmas.ย 
  • Globally informedย โ€“ understands knowledge from all cultures and languages.ย 
  • Human-centricย โ€“ prioritizes human safety and well-being over efficiency or profit.ย 

Global Expertise and Multilingual Knowledge Integrationย 

Current AI training practices are heavily skewed toward English-language datasets and English-speaking experts. While this approach allows for rapid development and broad usability in certain markets, it comes at a significant cost: the exclusion of decades, even centuries, of human knowledge that exists in other languages. Scientific discoveries, engineering innovations, medical techniques, philosophical traditions, and cultural practices are often embedded in languages other than English. By ignoring these sources, AI models inherit a narrow view of the world, limiting their problem-solving capabilities and reducing their ability to generalize across diverse contexts.ย 

Why this matters:ย 

  • Incomplete Intelligence:ย AI is not inherently less capable; its intelligence is artificially constrained by the scope of its training data.ย 
  • Missed Innovations:ย Techniques or approaches that exist in non-English sourcesย remainย invisible to the AI, even if they are superior to English-documented methods.ย 
  • Cultural and Ethical Blind Spots:ย AI mayย fail toย understand norms, practices, and ethical frameworks outside the English-speaking world, resulting in biased or inappropriate recommendations.ย 
  • Practical Consequences:ย Fields like medicine, engineering, and law rely on highly specialized knowledge often locked in local languages. AI that ignores these sources may provide incomplete, misleading, or even dangerous advice.ย 

Proposed solution:ย 

  • Actively recruit domain experts worldwide based on their professionalย expertise, not their English fluency.ย 
  • Integrate multilingual datasetsย containingย specialized knowledge, including unconventional techniques and local innovations.ย 
  • Continuously evaluate AIโ€™s understanding across languages and disciplines toย identifyย blind spots and gaps.ย 

By incorporating globalย expertise, AI can move beyond mere efficiency or linguistic accessibility. It can develop true intellectual depth, richer contextual understanding, and the ability to make ethically informed, globally aware decisions.ย 

AI has unprecedented potential to improve lives. But without independent ethical oversight, globalย expertiseย inclusion, and training in real-world consequences, AI may unintentionally scale the very problems humanity faces. By implementing these measures, we ensure AI becomes a tool for safe, ethical, and intelligentย assistanceย โ€“ not a source of uncontrolled harm.ย 

ย 

Author

Related Articles

Back to top button