Auli Viidalepp

academic website

topics

This is a sample list of lecture and course topics that I have presented in recent years. I am happy to speak on these topics, or to research and synthesize other related foci for a talk or an intensive course.

The history and the meaning of ‘artificial intelligence’

A lecture looking at the formation of the signifier artificial intelligence (AI) and the kinds of objects and technologies it has signified and sought to designate. Today, AI usually signifies a form of machine learning and/or a combination of software, hardware, and data. The concept of AI is also associated with autonomy and decision-making. From a broader perspective, AI can be understood as a technical practice, a set of methods, a discursive practice, or a cultural and socio-political phenomenon. In public consciousness, AI is understood mainly through specific objects and functions. The fields of cybernetics, behavioural psychology and economic theory have also contributed to the historical understanding and development of intelligent systems and functions.

The historical entanglements of AI, society and semiotics

Semiotic models and theories have been implicitly coded within the AI field since the very beginning, as AI models seek to automate semiotic processes or elements of them. Explicit calls for a semiotic theory of computers are more recent. Computer systems can be considered inherently semiotic due to their function as extensions of human symbolic communication and signification. At the same time, computer systems remain assemblages of interactivity led by semiotic agents (humans). This section will focus on the vocabulary and implications of such entanglements and outline the semiotic components of these supersystems.

Technology and society

Is technology good, bad, or neutral? Can technology be neutral at all? Are humans shaping technology, or is technology shaping human culture? These questions still puzzle many researchers. This talk looks at the general trends of technological determinism and social constructionism and introduces some of the authors who have shaped the understanding of technology in the 20th century and earlier.

Social constructionism and its critique

What does it mean when researchers say that something is “socially constructed”? How can an individual or a society affect the course of history? Applied to current problems of alternative reality descriptions, conspiracy theories and viral media, the model of social constructionism reveals the mythological thinking and the “semiotics first” perception of everyday reality, where people often do not let themselves be distracted by the “brute facts” of their environment.

Language technologies and deepfakes

Until recently, upon hearing someone speak, it was reasonable to assume that the speaker was a human being. Now, new language technologies challenge this assumption, and we must get used to synthetically generated content: machines “speaking” and generating natural-language texts. It is increasingly difficult to recognise synthetic content as such. What does this mean for our culture(s) if we cannot distinguish human-produced media from synthetic media? How do text generators such as GPT-3 and deepfake technologies interfere with our traditional understanding of communication and its counterparts?

Automation bias and anthropomorphism

According to various surveys, 6–30% of people are willing to change their opinion when confronted with a differing “opinion” from a machine. This problem, called automation bias, the android fallacy or (algorithm) complacency, can be seen as part of a larger problem area: the anthropomorphism of technology in general, and especially of intelligent systems with a degree of autonomy. Partly, this comes down to the metalanguage used in the discourse on technology. While it is not possible to avoid anthropomorphism entirely, examples from public and specialist discourses show that when people gain experience with and a better understanding of the functions of the machines they work with, anthropomorphism in their descriptive language decreases significantly. We will look at examples of studies and errors falling under automation bias and ask how the resulting societal problems can be mitigated in technology design processes.

Problems with generative media

Until recently, language was the unique domain of human beings. Now, new language technologies challenge this exclusivity and enable the production of synthetically generated content. Additionally, deepfake algorithms enable the imitation of human voices and faces. It is increasingly difficult to recognise synthetic content as such. What does this mean for our culture(s)? How do text generators such as ChatGPT and deepfake technologies interfere with our traditional understanding of communication?

Computer systems as semiotic technologies

Computer systems are above all semiotic technologies, insofar as they mediate sign and signification processes between humans. In recent technologies, especially those of the artificial intelligence type, the network of humans participating in these processes is very wide and complex, and often nearly invisible. Public media discourse further aggravates the issue by presenting intelligent technologies as characters rather than complex systems. Thus, it becomes less clear who the participating parties in the signification process are, and where and how computer outputs acquire meanings. This creates a situation where machines are often attributed greater autonomy, authority and agency than they possess, ending in the excessive anthropomorphism of technology (and AI). Additionally, our cultural history has, over several centuries, presented humans and machines as parts of the same continuum, and several disciplines, theories and research directions continue to do so today.

AI today: how synthetic content impacts the work of creative professionals

Recent text-to-text and text-to-image generators (such as ChatGPT, Midjourney or DALL-E) are frequently perceived as threats to the work of artists, copywriters and other creative professionals. Others have taken to extending their cognitive capacities by utilizing generative AI. At the same time, this new way of remixing and reusing cultural texts challenges our habitual notions of the author, copyright and creative ownership. We will look at how generative models function and at their impact on societal structures.

Case study: Military technologies and their impact on society

Technology has always been developed partly with military support and for military purposes. This entanglement has shaped a vocabulary that is now common in the public discourse about intelligent or autonomous machine functions. This lecture takes a look at a selection of such concepts (unmanned, human-in-the-loop, etc.) and their impact outside the military domain.

Case study: Ancient ‘robots’ and medieval automata

The word ‘robot’ originates with the Czech writer Karel Čapek. However, automata are common throughout history, both in the real world and in stories. We will look at the ‘living statues’ of Ancient Greek legends and at the role of mechanical temple marvels in the everyday culture and religion of Ancient Greece. Moving on to medieval Europe, the automata and artifice of stories merge with the concept of Natura artifex, inspiring the 17th-century notions of mechanical nature and the clockwork world, which have in turn contributed to the contemporary models of rationality underlying the conceptualisation of current technologies.

Case study: Superheroes, cyborgs, and artificial creatures in science fiction narratives

In contemporary science fiction, robots are often depicted as perfect humans. We will look at the spectrum of artificial, non-human or half-human creatures in fiction and at their construction as the ‘human mirror’. We will also inquire into the role of fiction in relation to reality, how it criticizes contemporary society, and how sci-fi imagery is used by technologists in their discourse.