• Opinion
  • 23 May 2024

A mantra runs through Europe


“Artificial Intelligence won’t take your job away; someone who knows how to use it will”

Image: Gerd Altmann / Pixabay

License Creative Commons

 

Enrique Benítez Palma

 

It is amusing to begin an article by paraphrasing the opening lines of the Communist Manifesto. The mantra (not the spectre) haunting Europe and much of the economically advanced world concerns the anticipated impact of Artificial Intelligence (AI) on the labour market. In the face of the cataclysm announced for employees, employers and students alike, numerous organisations and media outlets keep repeating the phrase, of unclear authorship, whose genealogy seems to lead back to Fei-Fei Li of Stanford University.

However, reading Fei-Fei Li directly, the narrative takes on a different complexion. A leading figure in the field of artificial intelligence ethics, she is also one of the founders of Stanford’s Institute for Human-Centred AI. In a 2024 interview, the professor presents a clear and compelling argument: “the difference between those who understand how to use AI and those who don’t is going to have far-reaching implications”. In the same interview, she defends the role of the public sector as an evaluator of AI and recalls the responsibility of developers and engineers to reflect on the ethical and social impact of their technological advances. These approaches bring to the table the so-called “Oppenheimer moment” of AI, described by Benjamin Labatut in The MANIAC: “if our species was to survive the 20th century, we needed to fill the enormous void left by the flight of the gods, and the only viable candidate for that strange and esoteric transformation was technology”.

For Fei-Fei Li, “education is critical”. But what kind of education is this? Is it an education that, once again, places its focus, interest and funding on learning certain tools in order to secure better job placement for students? Continuous training for workers to improve their skills (upskilling), so that companies and the public sector can fully harness the potential of AI? Don’t we also need humanistic content that, in line with Fei-Fei Li’s principles, allows us to analyse, understand and, if necessary, criticise the application of AI when it infringes on privacy, laws or even the most basic human rights? The answers to these pertinent questions define the model of society we believe in: a society of well-intentioned and obedient operatives, or a society of critically capable citizens ready to act as an active counterbalance to dominant narratives and large corporate interests. This is no small matter.

The consequences of the AI explosion for the labour market and employment are already a cause for concern. It is not the purpose of this article to analyse the existing reports on the impact of AI on employment. It is worth recalling, however, that the so-called ‘creative professions’, which were thought to be safe, already feel the threat posed by Generative AI, trained on masses of data without paying the corresponding royalties. The number of applications keeps growing, and this expanding universe enriches and impoverishes in equal parts, and not always in an equitable manner. Solutions rarely emerge beyond the already familiar “every man for himself”.

In this debate on the education needed to face a present driven by prevailing technological trends, big headlines and unreflective alarmism, and in which educational administrations seem absent (if they have appeared at all), some counter-current proposals stand out that vindicate the traditional values of the humanities and social sciences. This is exemplified by the views expressed by Tallulah Holley on her London School of Economics blog, where she asserts that “by prioritising STEM (Science, Technology, Engineering and Maths) education over SHAPE (Social Sciences, Humanities and the Arts for People and the Economy) disciplines in schools, we are preparing students poorly for a complex future”. Professor Holley’s proposal is simple and reasonable. She asserts that “A single-minded focus on solving problems solely through STEM does not make use of the vast array of tools available to us and impoverishes all disciplines in the process. If collaboration is a fundamental skill of SHAPE, it is imperative that we work together, bringing to the table people from a wide range of backgrounds, experience and expertise”.

Rishi Sunak’s assertions regarding the potential termination of humanities studies (“university courses that do not offer good results, with high drop-out rates and poor employment prospects, will be subject to strict controls”) and the unstoppable advance of political postulates financed by large technological corporations presage an uncertain and dark future. As Ulrich Beck observed, the solution cannot be the search for individual (biographical) solutions to systemic contradictions and problems. The explosion of AI is a systemic issue. It would be prudent to consider the social consequences of AI, and to apply ethical principles and democratic public regulation. In the absence of rigorous, humanistic education, it is inevitable that we will become the subjects of algorithms, their designers and their owners. The future is already a present reality.


Source: educational EVIDENCE

Rights: Creative Commons
