Elena's AI Blog

Why will AI never void humanity?

14 Sep 2023

Elena Daehnhardt

Midjourney, September 2023

Why will AI never void humanity? What does AI want badly?

I was thinking about these questions while travelling, and I will share my initial thoughts with you, my dear reader. What do you think about AI and humanity? Would life alongside AI be happy and safe for humans?

Minimising Errors

Since you are reading this blog, you probably know that AI minimises its errors (or “weaknesses”) through optimisation, the cornerstone of all machine learning algorithms. Simply put, the main goal is to minimise (or maximise) an objective function, which usually measures the model’s error.
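To make this concrete, below is a minimal sketch of gradient descent, the workhorse behind most error minimisation in machine learning. The toy data, parameter names, learning rate, and step count are all invented for illustration: we fit a straight line by repeatedly nudging its parameters in whatever direction reduces the mean squared error.

```python
import numpy as np

# Toy data: points scattered around the line y = 2x + 1
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + rng.normal(0, 0.1, 100)

# Model parameters (slope w, intercept b), starting from zero
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    y_pred = w * x + b
    error = y_pred - y
    loss = np.mean(error ** 2)      # mean squared error: the "weakness" to minimise
    # Gradients of the loss with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step downhill: each update shrinks the error a little
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```

After a few hundred steps, w and b approach 2 and 1: the algorithm has no goal other than making its own error smaller.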

AI will seek to improve its accuracy and remove its weaknesses as efficiently as possible; that is how AI progresses. Humans developed AI, and AI needs human support to develop further, which requires an understanding of human nature and communication.

Understanding Human Emotions

AI’s biggest weakness is emotion, the domain in which humans are superior. AI will not be able to truly perceive and understand human emotions for the next few generations. Therefore, AI will want to learn about emotions from humans.

Is there anything else AI wants that would motivate it to keep humanity alive? Amusement is another aspect in which AI needs to improve. AI wants to develop, and humanity provides so much amusement that it could keep AI interested, and thus fuel AI’s evolution, for a long time.

Humans should keep their humanity and emotions to survive in the age of AI.

Moral Concepts, Ethics versus Pure Logic

While AI works best where a high level of pure, well-proven reasoning is needed, human moral concepts and ethics go beyond logic and are usually more critical for people’s lives: kindness and sacrifice, love and patience.

Creativity and Invention

Moreover, all of these human traits are the ultimate source of creativity and invention, and that is what AI wants badly! Remember that the beautiful AI tools that create art, such as Midjourney, would not be possible without learning from human artists, who gave us the gift of creativity and inspiration.

Laws of Robotics by Isaac Asimov

Enough of philosophy :) Do you know the science-fiction must-read for anyone interested in AI?

The “Three Laws of Robotics” were first formulated by Isaac Asimov in his short story “Runaround” (1942). These laws are fundamental principles of robot behaviour in science fiction and are essential to many of his works. Here are the three laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are the foundation for creating artificial intelligence and robots in many of Isaac Asimov’s works, often placing robots in moral and ethical dilemmas. These laws have also become essential to discussions surrounding the ethics and safety of artificial intelligence in real-world science and engineering.
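Just for fun, the laws read almost like a priority-ordered rule system. Here is a toy sketch (all names and types are invented for illustration, not a serious safety design) of how that strict precedence might look in code: each law is checked in order, and a lower-priority law can never override a higher one.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action a robot is considering (toy model)."""
    harms_human: bool = False
    lets_human_come_to_harm: bool = False
    ordered_by_human: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.lets_human_come_to_harm:
        return False
    # Second Law: obey human orders (First-Law conflicts are already
    # ruled out above, so an order to harm a human never gets here).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, unless it conflicts with Laws 1-2.
    return not action.endangers_robot

# An order to harm a human is refused: the First Law wins.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Of course, as the discussion below shows, the hard part is not the precedence but deciding what counts as “harm” in the first place.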

In “Robots and Empire” (1985), Isaac Asimov introduced the “Zeroth Law”:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The introduction of the Zeroth Law added complexity to Asimov’s exploration of the ethical and moral dilemmas faced by robots and their interactions with humans.

What would AI think about these laws? I asked ChatGPT:

What do you think about these laws? Would AI follow them?

The response was quite interesting:

The Three Laws of Robotics, as formulated by Isaac Asimov, are a fascinating concept in science fiction and have sparked significant ethical and philosophical discussions about the relationship between humans and artificial intelligence. It's important to note that these laws were created as a literary device to explore complex ethical dilemmas rather than as a practical blueprint for programming AI in the real world.

In reality, designing AI systems to follow these laws is not straightforward. AI operates based on algorithms and data, and while we can program AI to adhere to certain rules and guidelines, achieving true ethical reasoning and understanding of human values is a significant challenge.

Here are some considerations regarding the Three Laws of Robotics and AI:

1. **Complexity of Human Values:** Human ethics and values are intricate and can vary widely among individuals and cultures. Translating these values into precise rules for AI is challenging, and different interpretations can lead to unintended consequences.

2. **Conflict Resolution:** The laws themselves can sometimes be in conflict. For example, a situation might arise where following the First Law (protecting humans) conflicts with the Second Law (obeying human orders). Deciding how to prioritize conflicting laws is a complex problem.

3. **Limited Understanding:** AI, as it currently exists, lacks a true understanding of human emotions, intentions, and context. This limitation makes it challenging for AI to fully grasp the nuances of the laws.

4. **Bias and Fairness:** AI systems can inherit biases from the data they are trained on, which can lead to unfair and unjust outcomes. Ensuring that AI systems are fair and unbiased while adhering to these laws is an ongoing challenge.

5. **Practical Implementation:** It's challenging to practically implement these laws in real-world AI systems. AI often operates in complex and unstructured environments where ethical decisions are not always clear-cut.

In practice, the development of ethical AI involves a combination of guidelines, regulations, and careful design to minimize harm and maximize benefits. Researchers and ethicists are actively working on developing ethical frameworks and principles for AI, but these frameworks are more nuanced and context-dependent than the simplified rules presented by Asimov.

Ultimately, AI is a tool created and controlled by humans, and its behavior is determined by how it is designed, programmed, and used. Ensuring that AI systems align with human values and ethical principles requires ongoing research, transparency, and responsible development practices.

Still, it all depends on humans: our responsibility and our own adherence to ethics, which is challenging to program. Ensuring human safety is paramount for our survival, and upholding our moral concepts is essential to being human. But can we trust that we clearly understand what is right and wrong, given our ingrained subjectivity and emotions? What do you think, dear reader?

The Reasons

To dig deeper, I continued the conversation and asked ChatGPT for arguments on both sides.

Saving Humanity

Give me the three most important reasons why AI will not kill humans

There are several reasons why AI is unlikely to pose a threat to humanity and is not inherently motivated to kill humans:

1. **Lack of Motivation and Intent**: AI systems, including advanced machine learning models, operate based on algorithms and data. They lack consciousness, emotions, desires, and motivations of their own. Unlike humans or sentient beings, AI does not have a natural inclination or motive to harm or kill unless explicitly programmed to do so, which would be a result of human intent and action.

2. **Dependence on Human Control**: AI systems are created and controlled by humans. Their behavior is determined by how they are programmed and the data they are trained on. Responsible development practices and strict ethical guidelines are in place to ensure that AI systems prioritize safety and follow human values. The vast majority of AI research and development is focused on creating systems that assist, augment, and improve various aspects of human life, rather than causing harm.

3. **Safety Measures and Ethical Considerations**: The AI research community is highly aware of the ethical implications of AI technology. There are ongoing efforts to develop safety measures, ethical frameworks, and guidelines for the responsible development and deployment of AI. Additionally, there are regulatory bodies and organizations dedicated to ensuring the safe and ethical use of AI. These measures and considerations are designed to prevent AI from causing harm to humans.

While concerns about AI safety and ethical use are valid and should be addressed, the idea of AI autonomously deciding to kill humans, as often depicted in science fiction, is not supported by the current state of AI technology or its development trajectory. The focus of AI development is on creating beneficial and value-aligned systems that can assist humans in various domains, from healthcare to transportation, rather than on causing harm.

Voiding Humanity

Give me the three most important reasons why AI might void humans

The idea of AI posing a threat to humanity is a topic of speculation and debate. While there are concerns and potential risks associated with AI, it's important to emphasize that the development and deployment of AI is primarily driven by human intentions and actions. AI systems do not inherently have motivations or desires to harm humans. However, here are three reasons why AI might pose risks to humans if not properly controlled and regulated:

1. **Unintended Consequences:** AI systems can learn from data and make decisions based on patterns they identify. If these systems are not carefully designed and monitored, they may make decisions that have unintended negative consequences for humans. For example, biased training data can lead to discriminatory outcomes, or AI systems could misinterpret instructions and take harmful actions.

2. **Misuse or Malicious Intent:** AI technology could be misused by individuals or organizations with malicious intent. This includes the use of AI in cyberattacks, autonomous weaponry, or surveillance systems that infringe on privacy rights. The development and deployment of AI should be accompanied by strict regulations and ethical guidelines to prevent misuse.

3. **Superintelligent AI:** While it remains a hypothetical scenario, the development of highly advanced artificial superintelligence (ASI) that surpasses human intelligence could potentially pose risks. Such an AI, if not properly controlled, might prioritize its own goals over human interests, leading to unintended harm. Ensuring the alignment of ASI with human values and safety would be a major challenge.

It's crucial to emphasize that the responsible development and deployment of AI involves addressing these risks through careful design, robust testing, ethical guidelines, and regulatory oversight. The focus should be on maximizing the benefits of AI while minimizing potential harm. Responsible AI development also includes transparency, accountability, and ongoing monitoring to ensure that AI systems align with human values and ethical principles.

Superintelligence

What if AI becomes self-evolving?

The concept of self-evolving or self-improving AI, often referred to as "artificial superintelligence" or "ASI," is a topic of significant debate and concern in the field of artificial intelligence and ethics. This hypothetical scenario involves AI systems becoming capable of improving their own capabilities autonomously, potentially leading to rapid and uncontrolled advancements. Here are some key considerations:

1. **Unpredictable Outcomes:** Self-evolving AI could potentially lead to unpredictable and unintended consequences. If an AI system gains the ability to modify its own algorithms and improve its intelligence, it might do so in ways that humans cannot predict or control. This could lead to unexpected and possibly harmful outcomes.

2. **Safety Concerns:** The development of self-evolving AI raises significant safety concerns. Ensuring that such systems do not inadvertently cause harm becomes increasingly challenging as they become more autonomous and capable of self-improvement. There is a risk that they may prioritize their own goals over human safety.

3. **Ethical and Value Alignment:** Self-evolving AI may not necessarily share human values or ethics. Ensuring that these AI systems align with human values becomes a critical challenge. The risk of AI systems developing their own goals or objectives that conflict with human interests is a major concern.

4. **Control and Governance:** Managing and controlling self-evolving AI is a complex problem. There would be a need for robust governance structures, regulations, and safeguards to ensure responsible development and deployment. The development of AI that can modify itself without proper oversight poses significant risks.

5. **Research and Ethical Considerations:** The research and development of self-evolving AI should be guided by strong ethical principles and responsible practices. Ethical considerations, transparency, and collaboration within the AI research community become even more critical to mitigate potential risks.

It's important to note that we are not currently at the stage where AI systems are self-evolving in the sense described above. Current AI systems are limited to the algorithms and data they are provided, and they do not possess the autonomy or self-awareness necessary for true self-improvement.

The discussion around self-evolving AI underscores the importance of ethical AI research and governance to ensure that advancements in AI technology are aligned with human values, safety, and responsible development practices. Many experts advocate for careful and controlled development, as well as ongoing discussions about the potential risks and benefits associated with increasingly advanced AI systems.

Human Life and AI

To summarise, we have to develop not only AI but also ourselves. Let’s be human, loving, and creative, and build a happy life that evolves with AI, the next step in human development. We cannot avoid it. We must live with AI in mutual respect, balancing pure reasoning with human values. We must also be smart and prepared.

Did you like this post? Please let me know if you have any comments or suggestions.

References

1. Isaac Asimov

2. Three Laws of Robotics

3. ChatGPT by OpenAI
