By: Amir Jalali

This manifesto is a reasoned, global call for "conscious caution" toward a technology that is quietly redefining the human relationship with thought, truth, and agency. As much as artificial intelligence can be a tool for human empowerment, it can also become a mechanism for cognitive dependence, the concentration of power, and the erosion of intellectual independence; not through coercion, but through convenience.
Principle 1: Free does not mean costless. No complex system that costs billions of dollars in infrastructure, energy, and development is offered without purpose or economic rationale.
Initial free access is a well-known part of the playbook of platform expansion: create dependence before regulation arrives. In this model, users are not simply consumers; through their daily interactions, they participate in improving and evolving the system. This participation is not necessarily unethical, but participating unknowingly can be.
Principle 2: Tools should not replace mental capacity. The history of technology shows that any tool, used without training in its critical use, can erode human abilities. The main danger of artificial intelligence is not "helping"; it is "replacing". A society that entrusts writing, analysis, decision-making, and judgment to automated systems gradually loses the fundamental skills of independent thinking. This process is slow, gradual, and often invisible.
Principle 3: Centralization of authority is more dangerous than technical error. When access to knowledge, analysis, and the interpretation of reality shifts from diverse paths to one or a few centralized systems, the issue is no longer just the accuracy of the answers; it is "authority". Even without malicious intent, every system has limitations, biases, and frameworks. If those frameworks become the dominant authority, diversity of viewpoints, the possibility of doubt, and the power of individual judgment are all weakened.
Principle 4: Structural dependency is hard to reverse. When education, research, media, the economy, and decision-making are all tied to a single infrastructure, breaking away becomes costly and sometimes impossible. This dependency forms not through coercion, but through efficiency and convenience. It demands the attention of policymakers, academics, and civil society before the structures become entrenched.
Global call: Informed use, not total surrender. This manifesto does not call for a halt to the development of AI. It calls for technological advancement to go hand in hand with training in critical thinking, algorithmic transparency, diversity of knowledge sources, responsible transnational regulation, and an active human role in decision-making. A future in which humans merely ask questions and machines answer them is a future devoid of creativity, ethics, and freedom.
Artificial intelligence should remain a tool for humans, not the ultimate authority on truth. This warning is not a call to fear; it is a call to responsibility.