Sunday, June 4, 2023

ChatGPT, value and knowledge.

I invited my close colleague and joint author of our latest book, Guglielmo Carchedi, to write this post.


In a comment on Michael Roberts' blog post concerning artificial intelligence (AI) and the new large language models (LLMs), the author and commentator Jack Rasmus raised some pertinent questions, which I felt bound to take up.

Jack said: “does Marx’s analysis of machinery and his view that machinery is congealed labor value that is passed into the commodity as it depreciates apply completely to AI software-based machines that have the increasing capability to self-maintain and upgrade their own code without human labor intervention – i.e. not to depreciate?”

My answer to Jack's legitimate question presupposes the development of a Marxist epistemology (a theory of knowledge), an area of research that has remained relatively unexplored and underdeveloped.

In my view, one of the key features of a Marxist approach is to make a distinction between ‘objective production’ (the production of objective things) and ‘mental production’ (the production of knowledge). Most important, knowledge should be seen as material, not as immaterial, nor as a reflection of material reality. This allows us to distinguish between objective means of production (MP) and mental MP; both are material. Marx focused principally, but not exclusively, on the former. Nevertheless, there are in his works many hints at how we should understand knowledge.  

A machine is an objective MP; the knowledge incorporated in it (or disincorporated from it) is a mental MP. So AI (including ChatGPT) should be seen as mental MP. In my view, given that knowledge is material, mental MP are as material as objective MP. So mental MP have value and produce surplus value if they are the outcome of human mental labour carried out for capital. So AI does involve human labour; only, it is mental labour.

Like the objective MP, mental MP are productivity-increasing and labour-shedding. Their value can be measured in labour hours. The productivity of mental MP can be measured, for example, by the number of times ChatGPT is sold, downloaded or applied to mental labour processes. Like objective MP, their value increases as improvements (further knowledge) are added to them (by human labour) and decreases due to wear and tear. So mental MP (AI) not only depreciate, but do so at a very fast pace. This is depreciation due to technological competition (obsolescence), rather than physical depreciation. And, like objective MP, their productivity will affect the redistribution of surplus value. Inasmuch as newer models of ChatGPT replace older ones, due to productivity differentials and their effects on the redistribution of surplus value (Marx's price theory), the older models lose value to the newer and more productive ones.
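
As a purely illustrative piece of arithmetic (every figure below is invented, not taken from any real data), the measurement in labour hours and the fast depreciation through obsolescence described above could be sketched like this:

```python
# Toy arithmetic for the paragraph above; all numbers are invented.
value_in_labour_hours = 100_000      # mental labour embodied in an AI programme
applications = 1_000_000             # times it is sold, downloaded or applied
print(value_in_labour_hours / applications)  # value transferred per use: 0.1 hours

# Obsolescence: a newer, more productive model devalues the older one,
# so depreciation here is technological, not physical.
loss_per_generation = 0.5            # invented fraction of value lost to a newer model
print(value_in_labour_hours * (1 - loss_per_generation))  # remaining value: 50,000 hours
```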

Jack asks: “Is this capability based on human labour or not?  If not, what does a ‘not’ mean for Marx’s key concept of the organic composition of capital and, in turn, for your (MR and mine – GC) oft-stated endorsement of the falling rate of profit hypothesis?”

My answer above has been that this 'capability' is not only based on human (mental) labour; it is human labour. From this perspective, there is no problem with Marx's concept of the organic composition of capital (C). Since AI, and thus ChatGPT, are new forms of knowledge, i.e. mental MP, the numerator of C is the sum of objective MP plus mental MP. The denominator is the sum of the variable capital spent in both sectors. So the rate of profit is the surplus value generated in both sectors divided by the sum of (a) the MP in both sectors and (b) the variable capital spent in both sectors. Thus the law of the tendential fall of the rate of profit is unchanged by mental MP, contrary to Jack's hint.
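
A minimal formalisation of this paragraph, using shorthand symbols of my own that do not appear in the original argument: c_o for constant capital advanced as objective MP, c_m for mental MP, v for variable capital and s for surplus value, all summed over both sectors.

```latex
% Organic composition of capital with mental MP included in the numerator:
C = \frac{c_o + c_m}{v}
% Rate of profit: surplus value over total capital advanced in both sectors:
r = \frac{s}{c_o + c_m + v}
```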

To better understand the points above we need to unpack and develop Marx’s implicit theory of knowledge. This is what the following paragraphs do, albeit in an extremely succinct version.

Consider first classical computers. They transform knowledge on the basis of formal logic (Boolean logic, or Boolean algebra), which excludes the possibility that the same statement be both true and false at the same time. Formal logic, and thus computers, exclude contradictions; if a computer could perceive a contradiction, it would register it as a logical mistake. The same applies to quantum computers.
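
To make the point concrete, here is a minimal sketch in Python (my own illustration, not anything from the original post): a truth-table check that formal logic never allows a statement and its negation to hold together.

```python
# In Boolean (formal) logic, "p and not p" is false for every truth value of p,
# so a contradiction can never be true; it can only register as an error.
for p in (True, False):
    assert not (p and not p)  # the law of non-contradiction holds in every case
print("No true contradiction is expressible in formal logic.")
```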

In other words, formal logic explains pre-determined mental labour processes (where the outcome of the process is known beforehand and is thus non-contradictory to the knowledge entering that labour process), but it excludes open-ended mental labour processes (where the outcome emerges as something new, not yet known). An open-ended process draws on a formless, potential store of knowledge, which has a contradictory nature because of the contradictory nature of the elements sedimented into it. Unlike formal logic, open-ended logic is based on contradictions, including the contradiction between the potential and the realised aspects of knowledge. This is the source of the contradictions between aspects of reality, including elements of knowledge.

To return to the point above: in open-ended mental labour processes, A = A and also A ≠ A. There is no contradiction here. A = A because A as a realised entity is equal to itself by definition; but A ≠ A because the realised A can be contradictory to the potential A. This is the nature of change, something formal logic cannot explain.

This holds also for artificial intelligence. Like computers, AI functions on the basis of formal logic. For example, when asked whether A = A and, at the same time, A ≠ A, ChatGPT answers negatively. Since it functions on the basis of formal logic, AI lacks the reservoir of potential knowledge from which to mine more knowledge. It cannot conceive of contradictions because it cannot conceive of the potential. These contradictions are the humus of creative thinking, i.e. of the generation of new, as yet unknown, knowledge. AI can only recombine, select and duplicate realised forms of knowledge. In tasks such as vision, image recognition, reasoning, reading comprehension and game playing, AI systems can perform much better than humans. But they cannot generate new knowledge.

Consider facial recognition, a technique that compares an individual's photograph with a database of known faces to find a match. The database consists of a number of known faces. Finding a match selects an already realised, i.e. already known, face. There is no generation of new knowledge (new faces). Facial recognition can find a match much more quickly than a human can. It makes human labour more productive. But selection is not creation. Selection is a pre-determined mental process; creation is an open-ended mental process.
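
A toy sketch of this selection process, under stated assumptions: the names below are hypothetical, and random vectors stand in for the face embeddings a real system would compute. The point it illustrates is only that matching picks out an already existing entry; nothing new is produced.

```python
import numpy as np

# A database of already-known faces (random vectors as stand-in embeddings).
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["ada", "karl", "rosa"]}

# A probe photograph: a noisy view of a face already in the database.
probe = database["karl"] + rng.normal(scale=0.01, size=128)

# "Recognition" = selecting the nearest known face; no new face is created.
match = min(database, key=lambda name: np.linalg.norm(database[name] - probe))
print(match)  # -> "karl"
```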

Take another example. ChatGPT would seem to emulate human creative writing. Actually, it does not. It gets its knowledge from large amounts of text data (the objects of mental production). Texts are divided into smaller pieces (words, word fragments or phrases), so-called tokens. When ChatGPT writes a piece, it does not choose the next token according to the logic of the argument (as humans do). Instead, it chooses the statistically most likely next token. The written outcome is a chain of tokens assembled on the basis of the statistically most probable combination. This is a selection and recombination of already realised elements of knowledge, not the creation of new knowledge.
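
A minimal sketch of that choice, assuming an invented probability distribution over candidate tokens (the words and numbers below are mine, not from any real model): the continuation is selected by probability, not by the logic of an argument.

```python
# Invented distribution over candidate next tokens.
next_token_probs = {"art": 0.45, "value": 0.30, "labour": 0.20, "quanta": 0.05}

# Pick the statistically most likely token: selection among realised
# elements of knowledge, not the creation of new knowledge.
next_token = max(next_token_probs, key=next_token_probs.get)
print(next_token)  # -> "art"
```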

As Chomsky et al. (2023) put it: “AI takes huge amounts of data, searches for patterns in it and becomes increasingly proficient at generating statistically probable outputs — such as seemingly human-like language and thought … [ChatGPT] merely summarizes the standard arguments in the literature”.

It could happen that ChatGPT produces a text that has never been thought of by humans. But this would still be a summary and re-working of already known data. No creative writing could emerge from it because new realised knowledge can emerge only from the contradictions inherent in potential knowledge.

Morozov (2023) provides a relevant example: "Marcel Duchamp's 1917 work of art Fountain. Before Duchamp's piece, a urinal was just a urinal. But, with a change of perspective, Duchamp turned it into a work of art. When asked what Duchamp's bottle rack, the snow shovel and the urinal had in common, ChatGPT correctly answered that they are all everyday objects that Duchamp turned into art. But when asked which of today's objects Duchamp could turn into art, it suggested smartphones, electronic scooters and face masks. There is no hint of any genuine 'intelligence' here. It's a well-run but predictable statistical machine".

Marx provides the proper theoretical framework for understanding knowledge. Humans, besides being unique concrete individuals, are also carriers of social relations, i.e. abstract individuals. As a designation of abstract individuals, 'humans' obliterates the differences between individuals, all of whom have different interests and world views. Even if machines (computers) could think, they could not think like class-determined humans, with different, class-determined conceptions of what is true and false, right or wrong. To believe that computers are capable of human thinking is not only wrong; it is also a pro-capital ideology, because it is blind to the class content of the knowledge stored up in labour power and thus to the contradictions inherent in the generation of knowledge.

For more on a Marxist theory of knowledge and its relation to Marx's law of value, see my recent paper, The Ontology and Social Dimension of Knowledge: The Internet Quanta Time, International Critical Thought, 2022, and our book, Capitalism in the 21st Century, chapter five.

