AI


First: what, exactly, IS artificial intelligence?


Definitions:

AI

or A.I. [ ey-ahy ]

noun, abbreviation

  1. artificial intelligence:
    • the ability of a computer, robot, programmed device, or software application to perform operations and tasks analogous to human learning and decision making, such as recognizing speech and answering questions
    • a computer, robot, programmed device, or software application able to perform operations and tasks analogous to human learning and decision making, such as recognizing speech and answering questions
    • the branch of computer science involved with the design of computers, robots, programmed devices, and software applications able to perform operations and tasks analogous to human learning and decision making

adjective

  1. relating to or produced with the aid of a computer, robot, programmed device, or software application able to perform operations and tasks analogous to human learning and decision making (Dictionary.com, 2023)

ARTIFICIAL INTELLIGENCE

1 – the capability of computer systems or algorithms to imitate intelligent human behavior

also, plural artificial intelligences: a computer, computer system, or set of algorithms having this capability.

2 – a branch of computer science dealing with the simulation of intelligent behavior in computers (Merriam-Webster, 2019).

GENERATIVE AI

variants or less commonly generative artificial intelligence

artificial intelligence (see ARTIFICIAL INTELLIGENCE above) that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples (Merriam-Webster, n.d.).


Now that we have an idea of what AI is, let's explore some of its risks and benefits.


5 Types of AI Agents

A review of Kilian et al.’s (2023) “Examining the differential risk from high-level artificial intelligence and the question of control,” published in Futures, Volume 151.


Simple Reflex Agent

Governed by immediate perceptions, this agent operates on instinctive responses to stimuli. For instance, consider the basic act of blinking to clear your vision: it's automatic, without deliberation.

Least complex, relying on hardcoded responses.
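A minimal Python sketch of the idea: a fixed table of condition-to-action rules and no memory of past percepts. The percepts and actions here (`dust_in_eye`, `blink`, and so on) are invented for illustration, not taken from the paper.

```python
# Simple reflex agent: hardcoded condition -> action rules,
# with no memory of anything it has perceived before.
def simple_reflex_agent(percept):
    rules = {
        "dust_in_eye": "blink",    # the blinking example above
        "bright_light": "squint",
    }
    # Anything outside the rule table gets a default response.
    return rules.get(percept, "do_nothing")

print(simple_reflex_agent("dust_in_eye"))  # blink
```

Because the rules are hardcoded, the agent cannot handle any situation its designers did not anticipate, which is exactly what makes it the least complex type.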


Model-Based Agent

This agent remembers past states, like a chess player pondering future moves. It uses this information to assess and adjust to new situations, creating a dynamic model of the world as it changes.

More sophisticated, with some memory and anticipation.
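As a rough sketch (the percepts and the decision rule are invented for illustration), a model-based agent can be written as a class that accumulates observations into an internal model and decides from that remembered state:

```python
class ModelBasedAgent:
    """Keeps an internal model built from past percepts, not just the current one."""

    def __init__(self):
        self.model = {}  # remembered state of the world

    def act(self, percept):
        self.model.update(percept)  # fold the new observation into the model
        # The decision can combine facts observed at different times.
        if self.model.get("door") == "open" and self.model.get("light") == "off":
            return "turn_on_light"
        return "wait"


agent = ModelBasedAgent()
agent.act({"door": "open"})   # "wait" - not enough information yet
agent.act({"light": "off"})   # "turn_on_light" - it remembers the open door
```

A simple reflex agent given only the second percept could not make that decision; the internal model is what carries the earlier observation forward.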


Goal-Directed Agent

Purpose-driven, these agents are like marathon runners focused on the finish line. Each choice is strategic, designed to bring them closer to their objective, whether it’s a location, a state of affairs, or a numerical target.

Focused and strategic in its actions.
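A toy illustration (the one-dimensional world is made up for this sketch): the agent evaluates each possible step by whether it closes the distance to a goal position.

```python
def goal_directed_step(position, goal):
    """Take the one step that moves the agent closer to its goal."""
    if position < goal:
        return position + 1
    if position > goal:
        return position - 1
    return position  # goal reached


pos, goal = 0, 3
while pos != goal:
    pos = goal_directed_step(pos, goal)  # 0 -> 1 -> 2 -> 3
```

Every action is chosen relative to the goal, which is the defining trait of this agent type.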


Utility-Based Agent

Here, the agent is akin to a savvy investor seeking the highest return, not mere completion of tasks. Its actions are calculated to maximize benefit, efficiency, and effectiveness.

Complex decision-making based on maximizing utility.
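In sketch form (the options and payoffs are hypothetical), the difference from a goal-directed agent is the explicit utility function being maximized, rather than a single goal being reached:

```python
def utility_based_choice(actions, utility):
    """Pick the action with the highest estimated utility, not just any that works."""
    return max(actions, key=utility)


# Hypothetical options, each scored as benefit minus cost.
payoffs = {"invest_a": 5 - 2, "invest_b": 8 - 4, "hold": 1 - 0}
best = utility_based_choice(payoffs, lambda a: payoffs[a])  # "invest_b"
```

All three options "complete the task" of allocating money; the utility function is what lets the agent prefer the one with the highest net return.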


Learning Agent

The most sophisticated of the group, learning agents adapt and evolve. They accumulate knowledge from experiences, refining their approach to challenges, much like a scientist formulating theories from experimental data.

Most advanced, with the ability to learn and adapt over time.
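One simple way to make that concrete (an incremental-average value learner, chosen here for illustration rather than taken from the paper) is an agent that refines its estimate of each action's worth as rewards come in:

```python
class LearningAgent:
    """Learns the value of each action from experienced rewards."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # current value estimates
        self.counts = {a: 0 for a in actions}    # times each action was tried

    def choose(self):
        # Prefer the action with the best estimate so far.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Incremental mean: each new reward nudges the estimate toward experience.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]


agent = LearningAgent(["explore", "exploit"])
agent.learn("exploit", 1.0)
agent.learn("explore", 0.2)
agent.choose()  # "exploit" - its observed rewards are higher
```

Unlike the earlier agent types, nothing about which action is best is built in; the preference emerges entirely from accumulated experience.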


4 Paths for the Future

Path 1: Balancing Act

  • Key Characteristics: Low impact, moderate to low likelihood, decentralized diffusion, regional dominance (US/EU), retreat from globalization, protectionism, cyber-war threats.
  • Drivers: COVID-19 pandemic aftermath, increased international tensions, decline in cooperation.
  • AI Development: Slow and gradual, primarily centered on regional innovation. Low likelihood of high-level AI before 2062.
  • Risks: Sporadic outbreaks of conflict, increased cyber-attacks, race dynamics of isolation, long development timeline.
  • Benefits: Safer, more controlled AI development; potentially less disruptive impact on society.

Path 2

  • Key Characteristics: Moderate impact, moderate likelihood, multipolar AI competition, technology outpacing governance, shift in power from nations to corporations, embodiment as a driver.
  • Drivers: Current AI research trends, technology giants consolidating control over industries.
  • AI Development: Faster than Balancing Act, with advanced AI arriving between 2042 and 2062.
  • Risks: Structural risks to society from rapid technological advancement, monopolization, safety concerns as capability outpaces governance.
  • Benefits: Potential for faster progress and innovation, but with increased risks and challenges.

Path 3

  • Key Characteristics: Subtle impact, variable likelihood, distributed network of intelligent agents, self-organization risks, control concerns; based on Eric Drexler’s “CAIS” (Comprehensive AI Services) framework.
  • Drivers: Planned smart-grid and next-generation network technologies, potential insight into intelligence.
  • AI Development: Hybrid AI paradigm, distributed diffusion, emergence of an AI ecology, advanced AI between 2042 and 2062.
  • Risks: Goal-alignment issues, runaway distributed systems, monopolization, safety concerns.
  • Benefits: Potentially less disruptive; distributed AI systems offering new capabilities, but with unique control and safety challenges.

Path 4

  • Key Characteristics: High impact, low likelihood, rapid AI takeoff in a non-Western state, AI arms race, centralized control, regional power shift (East/South Asia).
  • Drivers: Economic and geopolitical competition, nationalization of AI research, resource-prohibitive and compartmentalized AI technology.
  • AI Development: Extremely fast, with high-level systems expected within 20 years (before 2042).
  • Risks: System failures, inner misalignment, power-seeking AI, a regional arms race, high potential for disruptive impact.
  • Benefits: Potentially the fastest advances in AI, but with extreme risks and potential for significant societal disruption.