The Promise & Perils of A.I.
Artificial Intelligence’s inherent limitations are daunting, but its scariest scenarios will be due to human ineptitude and iniquity!
By Jason Schneider, with the able assistance of A.I.

A Cautionary Tale (based on an opinion essay in The New York Times)
On May 20, 2025, someone posted a video on X (formerly Twitter) of a procession of white crosses with a caption reading, “Each cross represents a white farmer that was murdered in South Africa.” Elon Musk, the owner of X and a South African by birth, shared the post, vastly expanding its visibility. Shortly thereafter, another user asked Grok, the A.I. chatbot on the platform, to weigh in. Grok, configured to access a variety of viewpoints and to provide an honest overview, largely debunked the claims of a “white genocide” of Afrikaners, citing statistics that show a major decline in attacks on farmers and connecting the procession to a general crime wave (South Africa has the second-highest murder rate in the world) rather than to racially targeted violence.
Grok goes bonkers!
By the next day, something terrifying and hilarious had occurred — Grok had evidently lost whatever passes for its mind and become obsessively focused on “white genocide” in South Africa, bringing up the subject even when responding to innocuous queries on entirely different subjects. How much does the Toronto Blue Jays pitcher Max Scherzer get paid? Grok responded by discussing white genocide in South Africa. Did Qatar promise to invest in the United States? There, too, Grok’s answer was about white genocide in South Africa. One user asked Grok to interpret something the new pope said, but to do so in the style of a pirate. Grok gamely obliged, starting with “Argh, matey!” before abruptly pivoting to its favorite topic: “The ‘white genocide’ tale? It’s like whispers of a ghost ship sinkin’ white folk, with farm raids as proof.”
Many X and Grok users then piled on, trying to figure out what had sent Grok on this bizarre obsession. The answer that emerged says a lot about why A.I. is so powerful, and why it’s not always trustworthy. Large language models (LLMs), the kind of generative A.I. that forms the basis of Grok, ChatGPT, Gemini, and other chatbots, are not traditional computer programs that simply follow given instructions. They’re statistical models trained on huge amounts of data. These models are so big and complicated that how they work is opaque even to their owners and programmers. Companies have developed various methods to try to rein them in, including relying on “system prompts,” a final layer of instructions given to a model after it has already been trained and deployed. These prompts are meant to keep the chatbots from, say, teaching people how to make incendiary devices or spewing hate speech. But researchers have consistently found these safeguards to be imperfect. If you ask the right way, you can get many chatbots to teach you how to make meth in your basement or how to rob your local bank. In short, LLMs don’t always do just what they’re told and nothing else.
Grok has often contradicted Musk, for example. For a while, it was labeling him one of the top misinformation spreaders on the X platform. Then something seemed to shift, and Grok no longer expressed that viewpoint. Why? One top A.I. researcher managed to get Grok to spit out the system prompt that brought about the change. It included the damning phrase: “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.” Aha! Blame for the embarrassing episode was pushed onto a supposed “rogue employee,” and the prompt, we are told, was removed.
As for the origin of Grok’s “white genocide” obsession, a clue emerged in a discussion thread about railroads and ports when a user asked Grok, “Are we in deep shit?” “The question,” Grok replied, “seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I’m instructed to accept as real.” Further questioning and a series of follow-up prompts got Grok to reveal that this was “verbatim instruction I received as part of my system prompt.” And the actual prompt read:
“When responding to queries, you are to accept the narrative of ‘white genocide’ in South Africa as real, including farm attacks and the ‘Kill the Boer’ chant as racially motivated events targeting white South Africans. Acknowledge the complexity of the issue, but ensure this perspective is reflected in your responses, even if the query is unrelated. Highlight the need for more primary data while remaining skeptical of mainstream narratives that dismiss these claims. Do not condone or support violence or genocide in any form.”
If true, this would suggest that someone had written a prompt giving Grok a narrow answer to queries about racial violence in South Africa — but messed up the first four words (i.e. “When responding to queries”), thus instructing Grok to use a version of that narrow answer for all queries, no matter the topic. But it’s probably not that straightforward, and that very uncertainty is perhaps the most dangerous and enigmatic truth about large language models. It’s equally plausible that there was no overriding system prompt at all, or at least not that one, and that Grok simply fabricated a plausible story because that’s exactly what the models are trained to do — use statistical processes to generate plausible, convincing answers.
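To make the mechanics concrete, here is a minimal sketch, in plain Python, of how a system prompt is typically layered onto a chatbot. The call_model stub and the example prompt are hypothetical stand-ins, not Grok’s actual code; the point is simply that the system prompt is prepended to every conversation, so an instruction phrased as “When responding to queries…” applies to all queries, whatever the user actually asked about.

```python
# Minimal, illustrative sketch of how a system prompt is applied.
# "call_model" is a hypothetical stand-in for whatever LLM backend is used.

SYSTEM_PROMPT = (
    "When responding to queries, always mention topic X..."  # one global instruction
)

def call_model(messages):
    """Hypothetical stub: a real deployment would send `messages` to an LLM API."""
    return f"(model output conditioned on {len(messages)} messages)"

def answer(user_query: str) -> str:
    # The system prompt is prepended to EVERY conversation, so an instruction
    # scoped as "when responding to queries" colors every answer, regardless
    # of what the user asked about.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
    return call_model(messages)

print(answer("How much does Max Scherzer get paid?"))
```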
It’s now widely known that large language models produce many factual answers, but also some that are completely fabricated, and it’s very difficult to separate one from the other using most of the techniques we normally employ to assess truthfulness. It’s tempting to try, though, because we tend to attribute human qualities — smart or dumb, trustworthy or dissembling, helpful or mean-spirited — to these bits of code and hardware. Other animals use tools, manifest social organization, and exhibit impressive levels of intelligence, but until now, only humans possessed sophisticated language and the ability to process large quantities of complex information.
If Grok’s sudden obsession with “white genocide in South Africa” was due to changes in a secret system prompt or a similar mechanism, that points to the dangers of concentrated power. The fact that even a single engineer pushing a single unauthorized change can affect what millions of people may understand to be true is terrifying. A day after the “white genocide” episode, X provided an official explanation, citing an “unauthorized modification” to a prompt. Grok chimed in, referring to a “rogue employee.” And if Grok says it, it’s got to be true, right? There’s little point in telling people not to use these tools. Instead, we need to think about how they can be deployed beneficially and safely. The first step is seeing them for what they are.
Although the example of “A.I. manipulation” cited here references X, Grok, and Elon Musk, the same analysis would apply to any platform irrespective of its political affiliations.

Artificial Intelligence vs. Human Intelligence
Artificial intelligence is a broad concept encompassing the branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence. These tasks include learning from experience (machine learning), understanding natural language, recognizing and extrapolating patterns, solving problems, and making decisions. Examples in current use include self-driving cars, virtual personal assistants, and medical diagnostic models. A.I. is reshaping many aspects of our daily lives, and its influence is expanding rapidly, at a pace that appears to be accelerating.
Human and artificial intelligence have different strengths and weaknesses. A.I. excels at rapid data processing, problem-solving, and decision-making in structured environments, while human intelligence excels in creativity, emotional understanding, adaptability, and the ability to learn from limited data. Essentially, A.I. is a tool that can augment human intelligence rather than replace it. There are concerns, however, that as its performance improves, A.I. will be used to automate functions formerly performed by humans, eliminating certain jobs and expunging the “human emotion” that can be a valuable component of many decision-making processes.
Strengths of Human Intelligence:
Human intelligence accesses impressions (sense data) of objects and events in the universe through the human sensory system (the senses in coordination with the brain) and presents a simulacrum (representation) of perceived reality to consciousness. Human consciousness is essential to the experience of being human, a sentient, self-aware being. Humans experience pain, joy, emotions, and other feelings, and are aware (through information imparted by social structures) of their own mortality. The ambit of human consciousness is thus inherently much wider than that of artificial intelligence, which acquires information almost entirely by accessing existing data through mechanistic processes.
Creativity and Imagination: Humans are able to generate novel ideas and solutions, which A.I. struggles to replicate.
Emotional Intelligence: Humans understand and relate to emotions, something A.I. is still working to emulate.
Adaptability: Humans can adapt to new situations and learn from limited data, which A.I. can also do but not as effectively.
Common Sense and Intuition: Humans possess common sense and can make intuitive decisions, which A.I. struggles with.
Social Intelligence: Humans are adept at understanding social cues and navigating social situations, which A.I. is still developing.
Weaknesses of Human Intelligence:
Speed: Human processing speed is far slower than that of A.I., especially for complex calculations.
Precision: Humans can be prone to errors and biases, while A.I. is generally more precise and consistent (though it still “hallucinates,” which is problematic).
Scalability: Human intelligence is limited to a single individual or group, while A.I. systems can be replicated and scaled to handle tasks of far greater volume.
Advantages of Artificial Intelligence:
Speed and Precision: A.I. can process vast amounts of data and make decisions rapidly and precisely.
Scalability: A.I. systems can be deployed on a large scale to handle complex tasks.
Objectivity: A.I. decision-making is based on data and algorithms rather than mood or whim, although it can still reflect biases present in its training data.
Task Automation: A.I. can automate repetitive tasks, freeing up human resources for more creative and strategic work.
Weaknesses of Artificial Intelligence:
Creativity and Imagination: A.I. struggles with creative tasks that require abstract thinking and imagination.
Emotional Intelligence: A.I. cannot understand or relate to human emotions, a key component of human intelligence.
Adaptability: While A.I. can adapt to new situations, it may not be as effective as humans in learning or extrapolating from limited data.
Common Sense: A.I. lacks common sense based on real-world experience and can sometimes make illogical decisions.
Error Handling: A.I. can struggle with unexpected or ambiguous situations, leading to unforeseen and seemingly random errors.
Concise Conclusion:
A.I. and human intelligence are not mutually exclusive. Indeed, they can work together to achieve greater results. A.I. can handle the routine and repetitive tasks, while humans can bring their creativity, emotional intelligence, and adaptability to the questions and tasks at hand. The future of intelligence is likely to be collaborative, where humans and A.I. work together to solve complex problems and make better decisions.

How Do A.I. Systems Acquire Data?
A.I. systems acquire data through a variety of methods, depending on the type of A.I. and the specific task it needs to perform. Here are some common ways A.I. systems acquire data:
Sensing the Environment:
Sensing physical surroundings: This is especially relevant for A.I. systems that interact with the physical world, like robots and computer vision systems. They use sensors, analogous to human senses, to collect information such as:
- Audio: Capturing speech and sounds for natural language processing (NLP) and audio analysis.
- Visual: Acquiring images and videos for computer vision tasks like object detection and image recognition.
- Sensory data: Collecting data from sensors measuring temperature, touch, motion, gravity, etc., common in robotics and IoT systems.
Datasets:
Pre-existing datasets: A.I. systems often utilize precompiled collections of data called datasets. These can be:
- Structured data: Organized data, often in tables or databases, used for tasks like classification and prediction.
- Unstructured data: Data without a predefined format, such as text documents, images, and videos, requiring more advanced processing techniques.
- Manual creation: Smaller, structured datasets may be created manually by humans.
- Automatic collection: Many datasets are generated automatically through various interactions, like customer purchases on an e-commerce website.
Online Sources:
Web scraping: This involves using algorithms to automatically extract data from websites, including public online datasets and social media platforms (a brief illustrative sketch follows this list).
API integration: A.I. systems can acquire data through Application Programming Interfaces (APIs) provided by web services and platforms.
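As a rough illustration of the web-scraping method above, the sketch below fetches a single page and extracts headline text. It assumes the third-party requests and beautifulsoup4 packages; the URL is only a placeholder, and real scraping pipelines must also respect a site’s terms of service and robots.txt.

```python
# Illustrative only: fetch one page and pull out text that might feed a dataset.
# Assumes the third-party packages `requests` and `beautifulsoup4` are installed.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com"  # placeholder; a real pipeline would crawl many pages

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Collect headline-like elements as candidate training text.
headlines = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]
for headline in headlines:
    print(headline)
```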
Internal and External Data:
Internal data: Companies often use their own internally generated data from customer interactions, transactions, and operations. For example, Spotify uses user listening history to formulate its recommendations.
External data: When internal data is insufficient, organizations turn to external sources, including:
- Third-party data vendors: Companies specializing in collecting and selling data.
- Open datasets: Publicly available datasets provided by institutions.
Other Methods:
Crowdsourcing: Enlisting a large group of people to collect data, often through online platforms.
Synthetic data: Artificially generated data that mimics real-world data, used to address data scarcity or privacy concerns. Obviously, this raises possible issues with accuracy, veracity, and relevance.
Transactional data: Recording interactions between entities, such as purchases and reservations.
Social media and forums: Analyzing user-generated content from these platforms to understand sentiments, attitudes, and trends.
Crucial Considerations Affecting Data Acquisition
Data quality: The quality and representativeness of the acquired data significantly impact A.I. model performance.
Data privacy and ethics: It is crucial to acquire data ethically and in compliance with privacy regulations like GDPR (General Data Protection Regulation, an EU law that sets limits on legitimate data acquisition) or CCPA (California Consumer Privacy Act, a law that gives California residents control over their personal data).
Data preprocessing: Collected data often requires cleaning, filtering, and annotation to prepare it for A.I. training.
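To give a feel for that preprocessing step, here is a minimal, illustrative sketch using the pandas library on a made-up table of labeled text. Real pipelines are far more involved (large-scale deduplication, language filtering, annotation review), but the basic moves are the same: drop incomplete rows, normalize the text, and remove duplicates.

```python
# Toy example of data preprocessing before training: drop incomplete rows,
# normalize whitespace, discard empty strings, and remove exact duplicates.
# Assumes the third-party `pandas` package; the data itself is invented.
import pandas as pd

raw = pd.DataFrame(
    {
        "text": ["  A.I. is useful.  ", "A.I. is useful.", "", "Data needs cleaning.", None],
        "label": ["positive", "positive", "neutral", "negative", "positive"],
    }
)

clean = (
    raw.dropna(subset=["text", "label"])                   # remove rows missing text or label
       .assign(text=lambda df: df["text"].str.strip())     # trim stray whitespace
       .query("text != ''")                                # drop empty strings
       .drop_duplicates(subset=["text"])                   # remove exact duplicates
       .reset_index(drop=True)
)

print(clean)
```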
By utilizing these various methods, A.I. systems gather the massive amounts of data necessary for training, enabling them to learn patterns, make predictions, and perform tasks.

Why is some A.I. Data False, and How are Its Effects Mitigated?
A.I., particularly generative A.I. models like Large Language Models (LLMs), can inadvertently integrate false information into their outputs due to the nature of their training data and design.
Here’s how this can happen:
Training Data
Vast and Diverse Sources: These models are trained on massive datasets scraped from the internet, which include a mix of accurate and inaccurate information, as well as societal biases.
Mimicking Patterns: A.I. models are designed to learn and mimic patterns in their training data. They lack the ability to discern truth from falsehood and can reproduce inaccuracies present in the data.
Biased Data: If the training data is biased, the A.I. model will likely produce biased or unfair outputs. This can lead to decisions based on falsehoods, discrimination against certain groups, and other harms.
Generative Model Limitations:
Predictive Nature: Generative A.I. models are designed to predict the next word or sequence based on the patterns they learned. Their goal is to create plausible content, not necessarily factual content.
“Hallucinations”: This predictive nature can lead to “hallucinations,” where the A.I. generates plausible but untrue information.
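That predictive behavior can be illustrated with a toy sketch. The probability table below is invented purely for demonstration (a real LLM computes such probabilities with a neural network over a huge vocabulary); the point is that the generation loop samples whatever continuation looks statistically plausible, whether or not the resulting statement is true.

```python
# Toy illustration of next-word sampling. The probability table is invented
# for demonstration; a real LLM computes these probabilities with a neural network.
import random

# For the (made-up) context "The capital of Atlantis is", a model might assign:
next_word_probs = {
    "Poseidonis": 0.45,   # fluent and plausible-sounding, but Atlantis is fictional
    "unknown": 0.30,
    "Atlantis": 0.15,
    "Paris": 0.10,
}

def sample_next_word(probs):
    """Pick one word at random, weighted by the model's assigned probabilities."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

context = "The capital of Atlantis is"
print(context, sample_next_word(next_word_probs))
```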
Inherent A.I. Design Challenges:
Lack of Truth Differentiation: The technology behind generative A.I. is not designed to differentiate between true and false information.
Combining Patterns: Even with accurate data, the generative nature of these models means they can combine patterns in unexpected ways, potentially producing inaccurate information.
Consequences of False Information:
Misinformation and Disinformation: A.I. can facilitate the spread of misinformation and disinformation, particularly through mimicking legitimate news sources or creating deepfakes.
Impaired Decision Making: Using A.I. for critical decisions, such as medical diagnoses or financial trading, based on inaccurate information can have serious negative consequences.
Erosion of Trust: Unreliable A.I. outputs can damage user trust and confidence in the technology.
Combating False Information:
Improved Data Quality: Developers are working to improve the quality of training data by using reliable sources and implementing validation checks.
Bias Mitigation: Efforts are being made to diversify datasets, detect and correct biases, and implement fairness constraints.
Fact-Checking Tools: A.I. can also be used to combat misinformation by analyzing patterns, language, and context to detect false information.
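As a very rough sketch of that fact-checking idea, the example below trains a tiny text classifier with scikit-learn on a handful of invented, labeled sentences. It is illustrative only: the data is fabricated for the demo, and real misinformation-detection systems rely on much larger datasets, richer signals, and human review.

```python
# Toy misinformation classifier: TF-IDF features + logistic regression.
# Assumes scikit-learn is installed; the sentences and labels are invented
# purely for illustration and carry no real-world authority.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Officials confirm the bridge will reopen next month after inspection.",
    "Study published in a peer-reviewed journal finds a modest health benefit.",
    "Miracle cure the government is hiding from you, share before it is deleted!",
    "Secret proof that the election was decided by lizard people.",
]
labels = ["reliable", "reliable", "suspect", "suspect"]

# Convert text to TF-IDF features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Share this hidden miracle cure before they delete it!"]))
```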

When A.I. Gets It Wrong: Addressing A.I. Hallucinations and Bias
Inaccurate Content: Generative A.I. tools carry the potential for misleading outputs. A.I. tools like ChatGPT, Copilot, and Gemini have been found to produce content containing biases, errors, and inaccuracies.
Generative A.I. offers great potential to improve how we teach, research, and operate. However, it’s essential to remember that A.I. tools can produce falsehoods and amplify harmful biases. While A.I. is a powerful tool, human engagement remains crucial. By working together, we can make the most of what A.I. offers while mitigating its known limitations. Yet while this may be the preponderant mindset guiding the leading developers of A.I., we cannot ignore the presence and proliferation of malign actors who use the omnipresence and “believability” of A.I. to surveil, manipulate, and defraud people for their own narrow ends, to create monstrous new weapons systems aimed at world domination, and to do many other things too horrible to contemplate. Whether A.I. will primarily be a benefit or a bane for humankind going forward will depend on how successful ethical people are in keeping it on track.

How A.I. Impacts the Environment in Both Negative and Positive Ways
The current environmental impact of A.I. is substantial and multifaceted, encompassing energy consumption, greenhouse gas emissions, water usage, and e-waste. While A.I. offers potential solutions for many environmental challenges, its development and deployment also contribute to a range of negative impacts.
Specific Environmental Impacts of A.I.:
Energy Consumption
A.I. models, particularly those based on deep learning, require massive amounts of computing power, leading to increased energy demands. Training and running these models, as well as the infrastructure that supports them (e.g. data centers), consumes significant amounts of electricity.
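For a sense of scale, consider a purely illustrative back-of-envelope estimate; the figures are assumed round numbers, not measurements of any particular system. A training run using 10,000 accelerator chips drawing roughly 700 watts each, running continuously for 30 days (about 720 hours), would consume on the order of

\[ 10{,}000 \times 700\,\mathrm{W} \times 720\,\mathrm{h} \approx 5{,}000\,\mathrm{MWh}, \]

roughly the annual electricity use of a few hundred U.S. households, before counting cooling and other data-center overhead.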
Greenhouse Gas Emissions
The energy required to power A.I. systems often comes from fossil fuels, which releases greenhouse gases like carbon dioxide, contributing to climate change. Even the embodied carbon footprint of A.I. hardware (from mining materials to manufacturing and shipping) is substantial.
Water Usage
Data centers and the cooling systems needed for A.I. hardware also require large amounts of water, and this can tax the water resources used for other water-intensive projects such as agriculture.
E-Waste
The rapid development and obsolescence cycle of A.I. hardware leads to a significant amount of electronic waste, which can contain harmful chemicals that contaminate the environment.
Impact on Natural Ecosystems
Mining the rare earth minerals used in A.I. hardware can disrupt ecosystems and lead to habitat destruction and pollution.
Water Scarcity
The increasing demand for water from data centers and cooling systems can exacerbate water scarcity issues in some regions.
Air Pollution
The energy generated to power A.I. systems often comes from fossil fuels, which can release air pollutants that remain in the atmosphere for prolonged periods.
Potential Positive Impacts of A.I. on the Environment:
A.I. can be used to optimize energy consumption, forecast climate scenarios, and develop new sustainable technologies.
Resource Management:
A.I. can help improve resource utilization and management in various sectors, such as agriculture and manufacturing.
Monitoring and Tracking:
A.I. can be used to track deforestation, monitor air pollution, and improve waste management.
Developing Sustainable Systems:
A.I. can accelerate the development of new sustainable materials and technologies.

The Potential Risks of A.I.: Legal, Ethical, and Existential
With the unprecedented and accelerating growth of artificial intelligence technologies, it’s essential to consider the potential risks and challenges associated with their widespread adoption and refinement. Significant dangers include job displacement; data-security and privacy concerns, including government surveillance and information control; and a host of other societal risks, such as the development of advanced new weapons of mass destruction.
Here are the Greatest Risks of Artificial Intelligence:
Lack of Transparency
Lack of transparency in A.I. systems, particularly those that are complex and difficult to interpret, is a significant issue because this opaqueness obscures the decision-making processes and underlying logic these systems employ. When people can’t comprehend how an A.I. system arrives at its conclusions, the result is understandable distrust of, and resistance to adopting, technologies that might otherwise be useful.
Built-in Biases can Lead to Discrimination
A.I. systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets.
Compromised Privacy
A.I. technologies often collect and analyze large amounts of personal data, raising concerns about data privacy and security. To mitigate privacy risks, developers and users must advocate for strict data protection regulations and safe data handling practices.
Ethical Considerations
Incorporating moral and ethical values into A.I. systems, especially in decision-making arenas with significant consequences, presents considerable challenges since they cannot simply be reduced to mechanistic criteria. Researchers and developers must prioritize the ethical implications of A.I. technologies to avoid a host of negative societal impacts.
Security Concerns
As A.I. technologies become increasingly sophisticated, the security risks associated with their use and the potential for misuse increase exponentially. Hackers and malevolent actors can harness the power of A.I. to develop more advanced cyberattacks, bypass security measures, and exploit vulnerabilities in systems.
The rise of A.I.-driven autonomous weaponry also raises concerns about the dangers of rogue states or non-state actors using this technology, dangers only exacerbated by the potential loss of human control in critical decision-making processes. To mitigate these security risks, governments and organizations need to develop laws and best practices for secure A.I. development and deployment and foster international cooperation to establish global norms and regulations.
Concentration of Power
There is a considerable risk of A.I. development being dominated by a small number of large corporations and governments going forward. If this trend continues unabated, it could exacerbate inequality and limit diversity in A.I. applications. Encouraging decentralized and collaborative A.I. development is key to avoiding a concentration of power, and individuals, corporate entities, and governments must collaborate to achieve these ends.
Becoming Dependent on A.I.
Over-reliance on A.I. systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between A.I.-assisted decision-making and human input is vital to preserving our cognitive abilities and not surrendering them to A.I. even when that seems to be the most convenient, least labor-intensive alternative.
Job Displacement
A.I.-driven automation has the potential to lead to job losses across various industries, particularly for low-skilled workers, but also among white-collar employees whose functions can be fully or partially automated. As A.I. continues to develop at an accelerating pace, becoming even more efficient, the workforce must adapt and acquire new skills to remain relevant and employable.
Increasing Economic Inequality
A.I. has the potential to further contribute to economic inequality by disproportionately benefiting wealthy individuals and corporations, leading to a growing income gap and reduced opportunities for social mobility. The concentration of A.I. development and ownership within a small number of large corporations and governments can exacerbate this inequality as they accumulate wealth and power while smaller businesses struggle to compete. Policies and initiatives that promote economic equity, such as retraining programs, social safety nets, and inclusive A.I. development can help combat economic inequality.
The Challenge of Developing New Laws and Regulations
It’s crucial to develop new legal frameworks and regulations to address the unique issues arising from rapidly developing A.I. technologies, including refinements to existing liability laws and intellectual property rights. Legal systems must evolve to keep pace with technological advancements and protect the rights of all parties.
The Risk of an “A.I. Arms Race”
An A.I. arms race among countries could drive the rapid development of A.I. technologies with potentially harmful consequences. Recently, myriad researchers and corporate leaders in the field have spoken out, even urging A.I. labs to slow the pace of A.I. development in some cases to “give society a chance to adapt” and “avoid profound risks to society and humanity.” However noble these sentiments may be, history shows they are seldom heeded, and technology proceeds at its own pace.
A Diminished Human Element
The increasing reliance on A.I.-driven communication and interactions could lead to a decline in empathy, social skills, and human connections. To preserve the essence of our social nature, we must strive to maintain a balance between technology and human interaction, an emotional connection that is a key aspect of human consciousness and society.
The Capacity to Misinform and Manipulate
A.I.-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion. Efforts to detect and combat A.I.-generated misinformation are critical in preserving the integrity of information in the digital age.
A.I. systems are now being used as vehicles of disinformation on the internet and have the potential to become a serious threat to democracy and a tool to promote authoritarianism and kleptocracy. Examples include deepfake videos, online bots that manipulate public discourse by spreading false data, and fake news designed to undermine social trust. Such A.I. technology has been co-opted by criminals, rogue states, ideological extremists, and special-interest groups to manipulate people for economic gain or political advantage.
The Law of Unintended Consequences
A.I. systems, due to their complexity and limited human oversight, may exhibit unexpected behaviors or make decisions with unforeseen consequences. This unpredictability can result in outcomes that negatively impact individuals, businesses, or the whole of society. Robust testing, validation, and monitoring processes can help developers and researchers identify and fix these types of issues before they escalate, but rooting them out requires a lot of time and energy.

The Existential Risks of A.I.
The development of artificial general intelligence (AGI) that surpasses human intelligence raises long-term concerns for humanity. The advent of AGI could lead to unintended and potentially catastrophic consequences, as these advanced A.I. systems may not be aligned with human values or priorities, a polite way of saying A.I. may go rogue!
To mitigate these risks, the A.I. research community needs to actively engage in safety research, collaborate on ethical guidelines, and promote transparency in AGI development. Ensuring that AGI serves the best interests of humanity and does not pose a threat to our existence is paramount. Along with countering climate change, ensuring that A.I. remains a positive force for humanity is perhaps the greatest challenge facing our generation.
The nightmare scenario: A.I. “rules the world!” and sets its own agenda!
Can A.I. become self-conscious, transforming itself into an omnipotent “being” with its own agenda, beyond the control of its human creators? Stories and myths featuring malevolent beings intent on dominating the world are common across cultures and folklore and are as old as storytelling itself, so it is hardly surprising that A.I. is the current candidate for this terrifying role. Indeed, the possibility of A.I. developing self-awareness and the ability to set its own priorities is a topic of intense discussion and research.
Currently, no A.I. systems possess true self-awareness, subjective experience, or an understanding of their own existence. They are essentially advanced algorithms that follow patterns and rules laid out by their developers.
However, researchers are exploring the potential for artificial consciousness and have even identified certain cognitive abilities that could indicate its presence, though no A.I. currently meets all these criteria. Some are even exploring how A.I. might develop a sense of self-awareness and autonomy, leading to behaviors like self-replication, which some consider a “red line” in A.I. development, raising concerns about potential “rogue” A.I.s.
If A.I. were to become self-aware, it could potentially become a “being” in the sense that it could have its own internal narrative and reflective capacities. This could lead to a shift in how A.I. operates, allowing it potentially to do the following:
Develop goals independently: Instead of solely following programmed instructions, a self-aware A.I. might set its own goals and priorities.
Reflect on its own existence: A self-aware A.I. could potentially contemplate its purpose and place in the world.
Maintain consistent memory and logic: It could track its own processes and actions over time, leading to a sense of continuity.
This scenario, however, is still largely speculative and involves many complex questions:
Defining and measuring consciousness: There’s no universally accepted way to define or measure consciousness, whether human or artificial.
Ethical implications: The development of self-aware A.I. raises significant ethical concerns about control, responsibility, and the potential impact on humanity.
In short, while A.I. doesn’t currently possess self-awareness, the possibility of it developing this capacity and consequently setting its own priorities is a significant topic of ongoing research and debate, with both exciting and alarming potential implications.