AI will not give us precision medicine

Abstract

The completion of human DNA sequencing in the early 2000s initially generated widespread excitement and hope that it would revolutionize medicine. Over time, however, it revealed major limitations due to a lack of understanding of the highly complex genotype-phenotype pathway. Precision medicine has emerged as a response to these biotechnological innovations, tailoring treatments based on an array of new molecular and clinical “omics” data. However, the large volume and heterogeneity of the data available today require dedicated and highly efficient computational analyses. Widely used today are artificial intelligence techniques (such as machine learning) based on artificial neural networks, i.e., a mathematical model of how biological neurons work. Here, we show that artificial neural networks have nothing to do with biology, even though their popularity is largely due to their alleged ability to simulate the human brain. Furthermore, we argue that the analysis of large molecular datasets cannot be left to the computational side alone, i.e., it cannot be exclusively data-driven; on the contrary, it must meet the challenge of integrating data and expertise, of getting clinicians and data analysts to work together to take into account the absolute and ineradicable uniqueness of each patient’s characteristics.

INTRODUCTION

In the early 2000s, immediately following the completion of human DNA sequencing, all of science that matters was swept up in a tumultuous, unruly wave of enthusiasm and optimism. And we are not talking about a few extravagant, solitary thinkers in white coats, locked up in an ivory tower. We are talking about media in all the languages of the world; we are talking about leading figures of world politics and science. Almost everyone was convinced that “We are learning the language in which God created life” [1], that “The genome project will revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases” [1], and that “Over the longer term, perhaps in another 15 or 20 years, we will see a complete transformation of therapeutic medicine” [2]. And then there were the front pages of newspapers and magazines around the world. For example, on June 27, 2000, the New York Times ran a full-page headline: “Genetic code of human life is cracked by scientists”, commenting that it was “an achievement that represents a pinnacle of human self-knowledge” [3].

The Human Genome Project has faced many limitations and serious criticisms. One of its main weaknesses was certainly that it focused mainly on the DNA sequence, initially neglecting the importance of non-coding regions and of complex interactions between genes. In addition, genetic variability was underestimated, because the sample of individuals used in the project did not adequately represent the vast global genetic diversity. The biggest disappointment of the Human Genome Project was the lack of practical solutions for the treatment of complex diseases: the relationship between genotype and phenotype turned out to be much more complex than expected, and one of the main goals, namely to identify the molecular causes of disease specific to each individual, was not achieved. Precision medicine was born to address this critical issue in modern medicine.

THE DREAM OF PRECISION MEDICINE

Precision medicine originated in the field of oncology in the 1990s, when the first targeted therapies emerged that focused on patient-specific genetic mutations associated with specific tumor types. However, the term “precision medicine” has become widely used and recognized in recent years, in parallel with rapid advances in DNA sequencing technologies and molecular biology. Precision medicine has demonstrated success in several fields, including oncology, where the identification of specific mutations allows the development of targeted therapies.

In recent years, thanks to the explosion of molecular data, it has become clear that most diseases are complex, i.e., multifactorial. For example, tumors, diabetes, autoimmune or cardiovascular diseases are unfortunately common and have many interdependent “causes” related to the patient’s history. These diseases develop slowly over years, if not decades, and are often resistant to treatment. They are “long-term” diseases that result from a combination of factors such as inherited genetic predisposition, poor diet, comorbidities, environmental stress, and the aging of our body’s organs and immune defenses.

The concept behind “precision medicine” is the molecular and clinical characterization of the complex uniqueness of each disease, such as cancer. President Barack Obama expressed the idea that precision medicine offers the right treatment, at the right time, to the right person, every time [4]. Despite the disappointments and difficulties, the dream of precision medicine is still alive. In fact, in addition to DNA sequence, we now have access to molecular measurements that allow us to see every detail of a diseased cell. This “big data” varies from individual to individual and from cell to cell, and includes data on RNA, proteins, protein-DNA interactions, bacteria, and other factors. This dizzying array of measurements is called “omics” and represents an immense quantitative and qualitative leap toward complexity. Using efficient algorithms and increasingly powerful computing resources, the goal is to extract useful information to precisely define a patient’s uniqueness and tailor therapy to the greatest extent possible.

More than twenty years have passed, and despite the availability of large, heterogeneous amounts of clinical, physiological, and molecular data, the long-awaited revolution has not yet materialized. Complex diseases remain an unsolved challenge, and the peak of knowledge achieved so far seems to be only the top of a modest hill [5]. Curiously, the great merit of the genome project seems to be that it has greatly increased our awareness of our ignorance about the mechanisms of life and disease. However, this should not be seen as a weakness; the importance of this collective enterprise should not be underestimated, as demonstrated by the new targeted therapies able to attack proteins misfolded by genetic mutations. But we must never forget that there is still a very, very long way to go compared to what was once imagined.

However, a potentially revolutionary turning point has been reached: we have become aware of the complexity of disease and of the volume and heterogeneity of the clinical and molecular data available. Transforming these data into information useful for therapy requires a great deal of computational power from modern computer systems capable of processing data efficiently and adaptively. The key words are therefore: complexity, volume, heterogeneity, and computing power. It is also essential to combine the expertise of clinicians and data analysts to achieve the best results. In this context, artificial intelligence (AI) seems to be the perfect tool to tackle and tame the complexity of disease. AI has already proven its effectiveness in various fields, such as generating text (as ChatGPT does) or recognizing facial expressions in photos. This computational tool is based on the concept of an “artificial neural network”, which is able to “learn” and update itself as new data become available, thus proposing customized solutions based on dedicated algorithms and accurate analysis. Just what we need. Really?

THE PROMISES OF ARTIFICIAL INTELLIGENCE

Pick up any blog, any newspaper, any more or less specialized magazine, even the scientific journals of any sector, including medicine and biology, and you will find article after article, project after project, talking about the wonders of “artificial intelligence” and about new start-ups that are making stellar profits and desperately looking for experts to hire on the fly. We seem to be in the midst of a gold rush that spares no one. More than any other sector, medicine has been fascinated by so-called “artificial intelligence” in recent years, precisely because of the extreme need to manage the immense amounts of data that are now available even in medium-sized hospitals.

Technological advances in data generation and management – especially in the biomedical field – have been developing at an accelerating pace in recent years, and the trend shows no signs of abating. The term “artificial intelligence” itself has had mixed fortunes over the last 70 years or so, with different characteristics, goals, and results attributed to it, sometimes with very different nuances. Today, it is common to use the term “artificial intelligence” to refer to any computer system that is capable of making automated inferences about the real world based on the data available to it. This is why we speak of “machine learning”, i.e., the ability to “learn” to perform tasks from examples provided by humans, such as a self-driving car or translating a language simultaneously. As you know, the applications are endless, and the list grows dramatically every day.

THE NAME OF THE ROSE

In the field of precision medicine, AI can become a very serious obstacle rather than a brilliant solution. How is this possible? To understand the key issue, let us hear what Roger Schank, one of the leading AI theorists, professor of computer science at Yale and Stanford, who recently passed away, had to say. In an interview with CNN (https://youtu.be/PVb0OkRxRfc), Professor Schank suggests an apparent paradox to the interviewer: “What if instead of calling it artificial intelligence, we call it a computer program?”. The interviewer is surprised, disconcerted, and confused by this statement, but immediately recovers and continues: “We have been experiencing in recent years an extraordinary process of advancement of technologies and information technology. Don’t you think it’s time for AI now?”. Schank smiles sardonically and adds: “Well, then we’ll talk about very fast calculation. But making calculations extremely quickly doesn’t tell us anything about intelligence, it tells us that it could be useful to us”.

The names we give to things are very important because they define their essence and, above all, they evoke the context in which they are placed and from which they can refer to other meanings that complete them, specify them, or extend them in new directions. In other words, names create expectations, hidden expectations. It is no coincidence that the name of a pharmaceutical product often suggests its efficacy. That’s why using the term “intelligence” for something that is not “intelligence” is certainly a bad idea, but above all a very dangerous idea, especially in precision medicine.

LEARNING NEURONS

New York, 1958. A mild summer, but one of great accomplishment and even greater promise. President Eisenhower signed the legislation creating the National Aeronautics and Space Administration (NASA). Jack Kilby and Robert Noyce introduced the world to the first integrated circuit, the basic building block of all electronic devices. In short, an exciting summer for the history of technology, if we forget another nuclear test in the Pacific. But that same summer, another news item, perhaps the most “explosive” of all, appeared in an in-house publication of the Cornell Aeronautical Laboratory in New York State. The entire summer issue was devoted to the forbidden dream of mankind: “The Design of an Intelligent Automaton” [6], signed by a senior psychologist in the laboratory who would become director of the Cognitive Systems Research Program the following year, Dr. Frank Rosenblatt, with funding from the US Navy. It is worth reading the subtitle that modestly presents the “Perceptron” to the public: “A machine that senses, recognizes, remembers, and responds like the human mind”. Not bad, no doubt. The echo in the general press was surprisingly modest, with the New York Times devoting only a brief, disenchanted blurb to the subject [7]: “NEW NAVY DEVICE LEARNS BY DOING. Psychologist shows embryo of computer designed to read and grow wiser. The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. The embryo – the Weather Bureau’s $2,000,000 ‘704’ computer – learned to differentiate between right and left after fifty attempts in the Navy’s demonstration for newsmen. The service said it would use this principle to build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000”.

What is Rosenblatt’s “perceptron”? It is the physical realization (on a large computer) of a mathematical model of a human neuron proposed by Warren S. McCulloch, a neuroscientist, and Walter Pitts, a mathematical logician. The two American scientists published their research in 1943 [8] and the starting point is stated immediately, in the first paragraph of the abstract: “Because of the all-or-none character of nervous activity, neural events and the relations among them can be treated by means of propositional logic”.

The authors, based on the knowledge of the physiology of neurons at that time, assume that a neuron has a purely binary activity, and that therefore any event involving neurons and the relations between them can be attributed to propositional logic, that is, to the calculation of binary functions and operators. In this way, the behavior of the neuron is perfectly defined in a formal way by a law that operates on binary numbers, just like a computer. The analogy between neurons and computers is now thrown into the scientific arena.
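
To make this claim concrete, here is a minimal sketch (our illustration, not code from the 1943 paper) of how such a binary unit reduces to propositional logic: with binary inputs, integer weights, and a threshold, the all-or-none “law” of the neuron becomes ordinary Boolean calculation. The function name `mp_unit` and the particular weight and threshold values are ours, chosen purely for illustration.

```python
# A McCulloch-Pitts-style unit (illustrative sketch): inputs are binary (0/1),
# and the unit "fires" (outputs 1) when the weighted sum of its inputs
# reaches an integer threshold -- an all-or-none law on binary numbers.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# The basic propositional operators fall out of threshold choices alone:
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)  # fires only if both inputs fire
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)  # fires if at least one fires
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)  # an inhibitory input vetoes firing

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT(a)={NOT(a)}")
```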

THE MATHEMATICAL MODEL

The McCulloch and Pitts mathematical model represents the neuron as the basic element for processing data, a kind of computational atom. In fact, artificial neurons are considered as elementary units that receive one or more inputs (representing the excitatory and inhibitory electrical signals on the neuron’s dendrites) that are processed to produce an output at the axon terminals (representing the transmitted electrical signal). Each neuron communicates with the others through the axon’s ion channels. These channels consist of tiny holes that open or close depending on the voltage and concentration of substances in the regions inside and outside the cell, modulating the electrical signals in transit. The electrical activity of the neuron is typically composed of sequences of very short activations (about a millisecond) called “spikes” or “pulses”, and this explains the interpretation in binary terms of all or nothing (0 and 1). It is interesting to note how this neuron model fits perfectly with the “computational” view of the brain, which has elementary functions (inputs to the neuron that are “processed” and then transmitted to others) and the ability to connect to any number of other neurons to perform complex operations between “input” and “output”, that is, between the raw signal and the processed one for some purpose, such as “seeing”.

The brain, therefore, according to McCulloch and Pitts, would be nothing more than a disproportionately deep neural network, with an immense number of neurons (about a hundred billion) and an astronomical number of connections (about 10,000 per neuron, hence a total of about a quadrillion); the creation of an automaton that speaks, writes, watches “Game of Thrones”, waits for winter at the Wall, and is self-aware would therefore be only a matter of enough time and resources. The processing performed by a single artificial neuron is very simple: the input signals can be “amplified” or “attenuated” by multiplying them by appropriate values (called weights) and then summed. This value is then compared with an internal threshold (or a linear function): if it exceeds the threshold, the output is activated (possibly producing a “spike”); otherwise, it remains inactive. To fully define an artificial neuron, I must therefore assign a “weight” to each incoming signal and a threshold to each neuron, which defines the activation state of the output that is then passed on to the next neuron.
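
To make the arithmetic just described explicit, here is a minimal sketch (ours, not the author’s; all weights and thresholds are arbitrary illustrative values) of a two-layer forward pass: each unit multiplies its inputs by weights, sums them, compares the sum with its threshold, and hands its output to the next layer.

```python
# Forward pass through a tiny layered network of threshold units
# (illustrative sketch; all weights and thresholds are arbitrary).

def neuron(inputs, weights, threshold):
    # Amplify/attenuate each input by its weight, sum, compare with the threshold.
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if activation >= threshold else 0.0

def layer(inputs, weight_rows, thresholds):
    # Every unit in the layer sees the same inputs but has its own parameters.
    return [neuron(inputs, w, t) for w, t in zip(weight_rows, thresholds)]

x = [0.9, 0.2, 0.7]                                              # raw input signals
h = layer(x, [[0.5, -1.0, 0.8], [1.2, 0.3, -0.4]], [0.5, 0.6])   # first layer
y = layer(h, [[1.0, 1.0]], [1.5])                                # output unit
print("hidden:", h, "output:", y)
```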

Simply put, McCulloch and Pitts paved the way for the idea that a neuron is a piece of a computer that works on binary quantities, namely 0 and 1, and whose functions are defined by how these quantities are manipulated and transmitted to other neurons. The brain thinks; the brain is made up of interconnected neurons; and neurons are binary functions that can easily be implemented with an integrated circuit or, less easily, with a dedicated computer. Ergo, a computational system of “artificial neurons”, the legitimate child of the “perceptron” of which Rosenblatt speaks, can think, dance on TikTok, take selfies, write, be conscious. Easy, isn’t it?

Not at all. It is not enough to construct a vague approximation and mathematical brutalization of a phenomenon of frightening complexity that has emerged after two billion years of evolution and that has little or nothing to do with the “perceptron”, and then to assign to this puppet the properties of the original. It would be like drawing a sketch of a child and then thinking that the drawing can walk because it looks like the child. A tragic error, perhaps tragicomic, if it weren’t for the fact that today, more than sixty years after that mild New York summer, the world is again in the grip of the same frenzy, for the same reason and without any substantial novelty, apart from speed and number of connected neurons. But if the difference between artificial and real is so enormous, how did we come to believe that such a simplified and stylized representation could really be useful?

IF THIS IS A NEURON

In fact, McCulloch and Pitts’ methodological approach is entirely consistent with, and typical of, modern mathematical modeling. Their success is therefore not surprising. It must be remembered that scientific activity is characterized by its ability to “neglect” details in order to grasp the essence of a phenomenon [9], Galileo’s famous “difalcare gli impedimenti” (removing the impediments) [10]. But the fundamental point remains that, among the many possible simplifications and distortions of reality, it is necessary to find the one that works, that is useful for the purposes that interest us.

Unfortunately, the myth persists that any simplification of reality, as long as it is “mathematical” and seasoned with some vague knowledge of biology, is enough to produce something good, as if mathematics in itself were a guarantee of success. It is not always so, and in biology it is never so. The idea is always the same: mathematics “captures the essential”, even if this “essential” is more like a unicorn: there is no Platonic hyperuranium in which the abstract idea of a “tumor” resides. If in physics and engineering the concept of approximation has had great and undeniable success, the same cannot be said of biology, where detail and essence are not easily separable and where diversity is the heart of life: not the universal, but the particular.

An obvious example is the basic assumption of the McCulloch and Pitts model, which, starting from the empirical observation of an all-or-nothing neural activity, concludes without hesitation that the processing of the electrical signal takes place as if it consisted of purely logical, binary operations. It is a gigantic non sequitur, because electrical signals travel from neuron to neuron in “pulse trains”, short sequences of “spikes” or activations, and even today we do not know for sure how the real neuron encodes information in these pulse trains [11]. There are many hypotheses, the most widely used being that what counts is the number of spikes within a certain time window; but we still do not know how the brain uses these pulse trains to manipulate the information that passes through it. One thing, however, is certain. The idea of the “artificial neuron” as the basic element of the brain, like quarks for elementary particles, is dead and buried, and so are all its legitimate and illegitimate children: real neurons do not speak in binary, and the McCulloch and Pitts mathematical model contains no trace of the “pulse trains” that carry information. This is no small matter: the encoding process is missing from the neuron model, or rather, an encoding is there (the binary one), but it is the wrong one. The incredible thing is that even in the most modern versions of “neural networks” there is no trace of pulses. And then tell me if this is a neuron.
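
To see what the model leaves out, here is a minimal sketch of a textbook leaky integrate-and-fire neuron (our illustration with arbitrary parameters, not the author’s model, and itself a simplification): the output is a train of brief spikes distributed in time, and under the common rate-coding hypothesis the information would lie in how many spikes fall in a given window, a dimension simply absent from a unit that outputs a single 0 or 1.

```python
# Leaky integrate-and-fire sketch (standard textbook model, illustrative parameters):
# the membrane potential v integrates the input current, leaks back toward rest,
# and emits a brief all-or-none "spike" whenever it crosses threshold, then resets.

def lif_spike_times(current, steps=1000, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spike_times = 0.0, []
    for step in range(steps):
        v += dt * (current - v) / tau      # leaky integration of the input
        if v >= v_thresh:                  # the event itself is all-or-none...
            spike_times.append(step * dt)  # ...but the information lives in the times
            v = v_reset
    return spike_times

# A stronger input does not change the "1"s themselves: it changes the firing rate.
for current in (1.2, 2.0, 4.0):
    train = lif_spike_times(current)
    print(f"input {current}: {len(train)} spikes in 100 ms")
```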

FICTION OR REALITY?

Let’s see why an artificial “neuron” has nothing to do with a biological neuron, and therefore has no rational connection with any form of intelligence you may have in mind. In fact, there are many huge differences between the biological neuron and the artificial neuron, even in its most modern form. Let’s look at some of them to get an idea of what we’re talking about and the sidereal distance between two concepts that share the same name and are said to have similar potential. Here they are:

  • reproducing the number of neurons and connections of a real brain is physically impossible with current technology: the connections allowed in modern artificial networks fall short of the real ones by several orders of magnitude;
  • as mentioned above, real neurons encode information in the form of pulse trains;
  • the structure of the possible connections of the artificial network does not change over time, i.e., new connections are not created or destroyed. The artificial neural network therefore lacks one of the most biologically relevant properties of the human brain: the ability to continuously create and destroy connections throughout its lifetime;
  • increasing the number of layers does not, in general, improve performance on a given task. This means that, in principle, an artificial neural network of greater complexity can behave worse than a simpler one, in clear contradiction to the observation that real human networks are infinitely more connected than artificial ones and seem to perform much better in many tasks;
  • artificial networks have to be programmed for each individual task: they do not program themselves, but require an external operator who organizes the steps leading to the choice of the network’s free parameters, i.e., the weights of the connections and the values characterizing the internal basic function (a threshold, in the simplest case);
  • one of the most widely used algorithms for programming a neural network is called “backpropagation” (a minimal sketch of such a training loop follows this list). This algorithm has no chance of working in a real brain, because no biological trace of it has (yet) been found;
  • artificial neural networks have no memory, nor elements to store “facts” and “events” of the past, and their behavior is linked only to what is contained in the data used for their programming;
  • we should always remember that even if an artificial network behaves in a way that “resembles” human behavior in narrow but significant tasks, such as learning a language, similarity is not a criterion of reality;
  • each layer of artificial neurons is programmed separately, rather than having a complete network that works asynchronously, as in the real brain;
  • the layers of artificial neural networks are only connected to adjacent layers, while the structure of a brain is not organized in this way but presents neurons that can be connected to a very variable number of other neurons. Experimentally, only a few neurons are extremely well connected, while most have few connections;
  • real neural networks are extremely robust and resistant to malfunction and are able to repair themselves even after extensive damage. This is absolutely not the case with artificial neural networks, which can only restart from a saved pre-failure state once a human programmer has repaired the damage;
  • a programmed artificial network can be “copied” and transported to another system, even one built with a completely different technology, where it will produce exactly the same results. Obviously, we cannot do such experiments on humans, but we know very well that each brain is profoundly different from every other, and for the same inputs the outputs can be very different;
  • artificial neural networks do not need to sleep, they do not get bored, they can remain without doing anything indefinitely, they can be turned off and on again;
  • the white matter of the brain plays an active role in modulating the connections between neurons and is completely absent in an artificial network.

The list goes on, but I think that’s enough. Francis Crick, co-discoverer of the structure of DNA, writes about this [12]: “Most of these neural ‘models’ are not therefore really models at all, because they do not correspond sufficiently closely to the real thing”.
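
As promised in the list above, here is a minimal sketch (ours, purely illustrative; the architecture, learning rate, task, and number of iterations are arbitrary choices) of how backpropagation “programs” a network from the outside: an external gradient-descent loop, written by a human, repeatedly nudges the free parameters until the input-output behavior matches the training examples. Nothing in it resembles a known biological mechanism, and the network plays no part in choosing its own task.

```python
import math, random

random.seed(0)

# Backpropagation sketch (illustrative): a tiny 2-3-1 sigmoid network is
# "programmed" to compute XOR by an external gradient-descent loop.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3                                                                # hidden units
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 weights + bias each
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H weights + bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

for _ in range(20000):
    x, target = random.choice(data)
    # Forward pass: weighted sums squashed by a smooth threshold (the sigmoid).
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    # Backward pass: propagate the output error back to every weight.
    d_y = (y - target) * y * (1 - y)
    d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
    for i in range(H):
        w_o[i] -= lr * d_y * h[i]
        w_h[i][0] -= lr * d_h[i] * x[0]
        w_h[i][1] -= lr * d_h[i] * x[1]
        w_h[i][2] -= lr * d_h[i]
    w_o[H] -= lr * d_y

for x, t in data:
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    print(x, "target", t, "output %.2f" % y)
```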

THE UNIQUE AND ITS PROPERTIES

In the medicine of “unique cases”, also called “personalized medicine”, which uses enormous masses of biomolecular data, the limits of what artificial intelligence can do are exceeded: relying on some form of “learning” from “analogous” cases, which by definition do not exist, would be not only a gamble but a real mistake, both conceptual and practical, with very serious consequences. What does an interdisciplinary group (for example, a tumor board discussing a single complex case) do? It behaves in exactly the opposite way to an automatically “learning” neural network: instead of moving on the “descriptive” level of the disease in question, the discussion focuses on the “causes”, that is, on the underlying mechanisms that could sustain the pathological condition. But what if we used a “machine learning neural network” to build a multidisciplinary group in silico, i.e., a computerized one? The problem is that the real group could never be replaced by such a learning machine, not because of “the pride of human intelligence” or “the irreducible intuition of doctors and statisticians”, but for a much more down-to-earth reason: there is nothing to learn from the decisions of a multidisciplinary group. I repeat, there is nothing to learn. It seems strange, I know, but if you think about it, it is obvious: for a machine to “learn”, you need many “similar” cases on which to “train” the artificial neurons. But here the cases are all different! In fact, they ended up in the discussion of the multidisciplinary group for that very reason. So here is the bad news for fans of the latest technological innovation: “machine learning” is not useful (in fact, it is harmful) in precision medicine, that is, in highly personalized medicine that uses large amounts of data, and therefore it cannot be of any help to the members of the multidisciplinary group, who will necessarily have to do without it. What they need instead are powerful data integration tools, to build a common view of the patient’s micro and macro characteristics, and above all the ability to integrate their skills, that is, to reach a shared interpretation of the data: that shared interpretation is precisely the information that can be extracted from group work.

CONCLUSIONS

The problem considered here is the following. If we regard “artificial neural networks” as particularly efficient data analysis tools, which for selected applications, such as recognizing smiling faces or simultaneously translating simple conversations, can often classify satisfactorily, then we are in the real world: things will work more or less well if the data we have are “good” (i.e., relevant) and if the algorithm is effective at automatically distinguishing what is relevant from what is not, using the contextual information that the programmer provides. If, on the other hand, we think that the capabilities of a “neural network” derive precisely from the term “neural”, i.e., from its (false) ability to emulate the human neuron and its connections, then we are in the world of fantasy, where anything is possible. Unfortunately, the great success in the press and in public opinion is due exclusively to the fascination that the idea of a machine that thinks, and perhaps becomes conscious, exerts on us. If this seems exaggerated, just think of the Google employee who declared himself convinced that the artificial intelligence system the company had built had developed consciousness. Think about it: if the term “neural” were not used, would we ever associate the “neural network” with intelligence? I don’t think so. What if it were called a “distributed adaptive nonlinear approximation network”? With a name like that, no one would call it smart. Maybe boring, but not smart. Francis Crick writes about this [12]: “How has this curious situation arisen? Apart from a few enthusiasts, most theorists do not believe that, for example, children really learn to speak using a single, simple back-prop network inside their heads. Why, then, are such models considered not only useful, but also exciting?”

Here is his scathing but healthily realistic observation on the reasons “hidden” from the general public [12]: “It is not enough to do something that works. How much better if it can be shown to embody some powerful general principle for handling information, expressible in a deep mathematical form, if only to give an air of intellectual respectability to an otherwise rather low-brow enterprise”.

Birds have always inspired humans to fly, but today’s airplanes don’t look like metal skeletons flapping synthetic-fiber wings and breathing, eating, defecating, and reproducing on their own. They do much less, of course, but what they do (fly) they do very well, and much better than the birds that inspired them. Inspiration is fine, of course, but anthropomorphizing “deep neural networks” will only lead us to misunderstand what artificial intelligence can really do. Do you want to call airplanes “artificial birds”?

The impact on precision medicine of a purely “computational” view of human knowledge can be devastating, and we are already seeing signs of it. Of course, the issue is not whether or not to use these so-called “artificial intelligence” algorithms, but to be fully aware that, as data scientist Cathy O’Neil says in her TED talk “The era of blind faith in big data must end” (https://youtu.be/_2u_eHHzRto): “Algorithms are opinions embedded in code. That’s really different from what most people think of algorithms. They think algorithms are objective and true and scientific. That’s a marketing trick. It’s also a marketing trick to intimidate you with algorithms, to make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when we put blind faith in big data”.

One cannot use the results of an algorithm without knowing how it was built, on what hypotheses, on what data, and on what vision of the problem at hand. It is all too easy to separate the world of data analysis, of more or less intelligent algorithms, from that of the doctor, the clinician, who must use these algorithms to diagnose, to prognosticate, to treat. The most important thing we have is at stake – our health – and we cannot afford to get it wrong.

References

  1. Betsy R. Clinton and Blair hail gene “triumph”. The Guardian. 2000.
  2. Wade N. A decade later, genetic map yields few new cures. The New York Times. 2010.
  3. Wade N. Genetic code of human life is cracked by scientists. The New York Times. 2000.
  4. Nicol D, Bubela T, Chalmers D, Charbonneau J, Critchley C, Dickinson J, Fleming J, Hewitt A, Kaye J, Liddicoat J, McWhirter R, Otlowski M, Ries N, Skene L, Stewart C, Wagner J, Zeps N. Precision medicine: drowning in a regulatory soup? J Law Biosci. 2016;3(2):281-303.
  5. Joyner M, Paneth N. Promises, promises, and precision medicine. J Clin Invest. 2019;129(3):946-8.
  6. Rosenblatt F. The design of an intelligent automaton. Research Trends. 1958;6(2):1-7.
  7. New navy device learns by doing. The New York Times. 1958.
  8. McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5:115-33.
  9. Bachelard G. La formation de l’esprit scientifique. Paris: Librairie Philosophique J. Vrin; 1938.
  10. Galilei G. Dialogo sopra i due massimi sistemi del mondo. Fiorenza: per Gio. Batista Landini; 1632.
  11. Johnson K. Neural coding. Neuron. 2000;26(3):563-6.
  12. Crick F. The recent excitement about neural networks. Nature. 1989;337:129-32.

Authors

Lorenzo Farina - Sapienza Università di Roma

How to Cite
Farina, L. (2024). AI will not give us precision medicine. Annali dell’Istituto Superiore Di Sanità, 60(1), 8–13. https://doi.org/10.4415/ANN_24_01_03