
Renaissance to Algorithms: The Evolution of Autonomy
Renaissance humanism, championed by figures like Michelangelo, celebrated human potential and autonomy as the foundation of an ideal society. Drawing from ancient Greek philosophy, Michelangelo and his contemporaries believed that through individual willpower and intellect, humanity could collectively shape a better world. These ideals laid the groundwork for the democratic principles that shape governance today. Enlightenment philosophers like John Locke, Jean-Jacques Rousseau, and Montesquieu expanded on them, advocating for intellectual freedom, reason, and self-determination. Locke's social contract held that individuals consent to governance in exchange for the protection of their natural rights: life, liberty, and property. Rousseau argued that political authority derives from the collective will of the people, and Montesquieu's advocacy for the separation of powers established the foundations of modern democratic governance. These frameworks assumed that rational, free individuals could govern themselves, echoing the Renaissance vision of human potential.
However, just as Dolores in Westworld questions her autonomy, the rise of AI forces us to reconsider the principles upon which democracy is built. Renaissance humanists viewed autonomy as an inherent right, and Enlightenment thinkers embedded it in political theory. Yet AI introduces a new dilemma: Are we still the agents of our own governance, or are we gradually relinquishing that power to the systems we create?
The parallels between AI and human autonomy touch every aspect of society. Both are constrained only by human imagination, and AI systems now make increasingly complex decisions regarding governance, control, and civil liberties. This raises critical questions about power, responsibility, justice, and control: Who is accountable when AI systems fail or make detrimental decisions? Who benefits? Who suffers? And whose values are programmed into the heart of the model itself? These are not just technical issues; they are moral ones, touching on core values of democratic societies that have never been fully defined. If AI continues to evolve without clear governance structures, we risk undermining the very autonomy and dignity on which democratic ideals are built, trapping our self-governance in a never-ending feedback loop that no single or static solution was ever meant to resolve.
Democracy is not a static or fixed system but one that evolves and improves over time through continuous feedback, adaptation, and reform. Like AI, it was never meant to be finished or perfect. There is, therefore, no such thing as a final system of AI governance: these frameworks are, always have been, and always will be iterative.
AI Governance: A New Social Contract?
AI mirrors human cognition, encapsulating our capacity for logic, decision-making, and problem-solving. But as AI develops more autonomy, we are forced to reflect on our own consciousness: How much control do we truly have? As we shape AI, does AI, in turn, shape us? Will this transformation lead to positive change or unforeseen consequences?
A recent survey by the Center for Data Innovation and Public First revealed that Americans are divided on whether AI will improve society or create new challenges. Only 32% of respondents felt confident explaining how modern AI models work, and nearly half doubted their ability to identify AI-generated content. Despite AI's potential to drive economic growth, just 18% believe it could prevent future social conflict. Meanwhile, 55% of Americans believe AI will achieve human-level consciousness within the next decade, and 40% fear that AI could eventually destroy civilization.
This tension reflects the promise and peril of AI: it can either amplify human potential or undermine the values upon which democratic societies rely. As AI increasingly makes decisions on our behalf, from what we see online to how we access services, the lines between human agency and machine autonomy blur. Are we truly in control of our future, or are we gradually allowing AI to govern it for us? And, given that we built AI, how does AI governance differ from our own?