In this talk, I examine contemporary artificial intelligence—comprising machine learning models trained on large datasets—as a normative endeavor. I argue that machine-learning-based AI systems, particularly generative models, are socio-technical assemblages that engender new forms of authority, knowledge, and subjectivity by shaping and regulating the behavior of both humans and machines through the creation of new, often implicit norms. I demonstrate that this new paradigm of AI—distinct from earlier expert systems—approaches model training as an inherently normative process, exemplified by the ethics of "alignment."
This emerging trend marks a shift in how the relationship between knowledge production and digital technology is understood: moving away from ideals of "objectivity" and "neutrality" toward embedding moral values into models through the automation of human moral judgment. I contend that in doing so, big tech reframes the political question of "interpretative sovereignty" (Deutungshoheit) as an ethical issue, while unilaterally deciding what relationship with AI is desirable for society.