FACTORS INFLUENCING THE LIKELIHOOD OF AI-ASSOCIATED RISKS

Authors

  • Dejan P. Ninković MB University, Faculty of Business and Law, Belgrade, Serbia

DOI:

https://doi.org/10.61837/mbuir020224163d

Keywords:

AI, artificial intelligence, X-risks, existential risks

Abstract

This article surveys and discusses some of the factors influencing the mitigation of AI-associated concerns: Cold War mentality and weaponization; mystification; experience, knowledge base, and "wrong" experiments; safety vs. security; energy and water resources; increasing technology dependence and declining cognitive capacities; and the influence of the entertainment industry. The list is non-exhaustive; in fact, listing all influential factors would require volumes of space. These are perhaps the most prominent, and all of them have a negative impact on general awareness, thereby increasing the possibility of global, human-extinction-level events. Their depiction inevitably leads to the conclusion that we are heading in the wrong direction.

References

Ninković, D. (2023). Interakcija ljudi i veštačke inteligencije [Interaction between humans and artificial intelligence], TV Ras – Iz posebnog ugla, on https://www.youtube.com/watch?v=8jXlTCH0gb8 (October 12, 2023)

Hilton, B. (2022). Preventing an AI-related catastrophe, 80,000 Hours, on https://80000hours.org/problem-profiles/artificial-intelligence/ (January 18, 2024)

Clark, S., Martin, S. D. (2021). Distinguishing AI takeover scenarios, Alignment Forum, on https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios (January 18, 2024)

BlueDot Impact (2023). Primer on AI Chips and AI Governance, AI Safety Fundamentals blog, on https://aisafetyfundamentals.com/governance-blog/primer-on-ai-chips

Ninković, D. (2023). Moguće posledice američkog zakona o čipovima [Possible consequences of the American chips act], TV Ras – Iz posebnog ugla, on https://www.youtube.com/watch?v=zHZFUhPgUr4 (November 16, 2023)

Allen, G. C. (2022). Choking off China's access to the future of AI. Washington, DC: Center for Strategic and International Studies

Buchanan, B. (2020). The AI triad and what it means for national security. Washington, DC: Center for Security and Emerging Technology, Georgetown University

Shah, R., et al. (2022). Goal Misgeneralisation: Why Correct Specifications Aren't Enough For Correct Goals, DeepMind Safety Research, on https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924 (January 18, 2024)

Aschenbrenner, L. (2023). Nobody's on the ball on AGI alignment, For Our Posterity, on https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/ (January 18, 2024)

AI Safety Fundamentals Team (2022). Overviews of Some Basic Models of Governments and International Cooperation, AI Safety Fundamentals blog, on https://aisafetyfundamentals.com/governance-blog/international-cooperation-models (January 18, 2024)

Published

2024-12-27

How to Cite

FACTORS INFLUENCING THE LIKELIHOOD OF AI-ASSOCIATED RISKS. (2024). MB University International Review, 2(2), 163-172. https://doi.org/10.61837/mbuir020224163d
