What if, to the contrary, positive thinking represents a biased grasp of reality? What if, when I was depressed, I learned something valuable, that I wouldn’t be able to learn at a lower cost? What if it was a collapse of illusions – the collapse of unrealistic thinking – and the glimpse of a reality that actually caused my anxiety? What if, when depressed, we actually perceive reality more accurately? What if both my need to be happy and the demand of psychotherapy to heal depression are based on the same illusion? What if the so-called gold standard of therapy is just a comforting pseudoscience itself?
Computers are systems of abstraction, built on rocks that change state. Digital computers, from hardware and software to networks and vaguer notions of intelligence, all operate on pieces of rock and metal.
To be critical is to be radically optimistic.
To be critical is to care, to listen, to think for more than myself.
To be critical is to understand the complexities around us.
To be critical is to recognize our place in the messy reality.
To be critical is to do our best to make a dent in the atrocities in which we are complicit.
To be critical is not to be negative, nor cynical, nor nihilistic, nor pessimistic.
To act on a contempt for or impatience with content as such and a desire to get on to the capital value, the usefulness, the leverage, the effect or augmentation implicit in having consumed a thing, with not the taste of it in the moment in mind but the effect of the nutrients in the abstract; to reject enjoyment or supplant it with momentum; to void interpretation in favor of operationalism; to seek frictionless communication and consumption, to pursue the pleasure of efficiency instead of the uncertain satisfaction of interpretation, to surrender responsibility over what we do and desire, to exterminate the subject position and indulge the desire to be a machine, to be done with subjectivity and its unpredictable social integuments and reciprocities — all this is to give up on the possibility of being virtuous. Virtue is supposed to be its own reward, but we're not seeing the metrics for it.
The impulse to watch things or listen to things or read things at inhuman speeds indulges the same fantasy about becoming a machine and not needing to wrangle with interpretation or ambiguity or multiple simultaneous and contradictory possibilities. Instead, escape subjectivity into a perfectly comprehensible and operable world — into divine objectivity.
Automation deprives people of choices by claiming to fulfill them in advance, or by making the stakes of those choices seem beside the point. It tries to make swapping our will for superior processing capacity seem inevitable. “The automation of communicative processes envisions a surpassing of the pace and scale of human thought and interaction, which is why the technological imaginary tends toward post-humanism,” Andrejevic argues. “If automated systems can outstrip both human physical and mental capacities, avoiding obsolescence means merging with the machine.” This is not a humble concession to the machine’s superiority so much as the ultimate hubris. With enough surveillance and data capture in place we can assume a godlike totalizing perspective and automate the world in accordance with it.
In A Treatise on Christian Doctrine Milton describes the consequences of sin as a “spiritual death” that consists of the loss of “right reason” and manifests as a “deprivation of righteousness and liberty to do good, and in that slavish subjection to sin and the devil, which constitutes, as it were, the death of the will.” Sin is its own punishment because it subjects sinners to the compulsion of further sin: It is the opposite of freedom. An AI, then, is sinful by definition — without will and characterized ontologically by a “slavish subjection” to its programmed purpose. AI’s relentless and limitless pursuit of self-improvement through rigid, utilitarian conceptions of rationality condemns it to predictability: It will always do the selfish thing that maximizes utility along a single axis; it can’t conceive of doing something for others without that effort being reconceived as a form of utility that accrues to itself. AI forever lacks, to use Milton’s idiom, “grace.”
In short, because of AI’s strictly functionalist orientation toward “maximizing its utility function,” it is intrinsically evil. It lacks the will or capacity to do anything errant or gratuitous; it is compelled by its purpose to try to dominate.
To make information processing as efficient as possible, the point of the content of information must be suppressed and abstracted away: signal vs. noise, rather than something experiential or interpretive in its particulars. “Wanting” to do something — desire, subjective purpose, curiosity, etc. — impedes the industrialized process of forcing more of that something (some organization of information) to happen on capital’s terms.
“Saving time” with Smart Compose ensures further objectification within work processes, more and more emails automatically spoken through us, less and less hope that it is worth thinking about what we do to live. The same is true on the consumption side, with accelerated playback. Consumption is reduced to the work of information processing and participation in capitalist circuits of value creation.
The elimination of the subject at the level of media consumption, Andrejevic argues, plays into a larger project of social deskilling, reducing communication to the sheer instrumentality suitable to the mechanized pursuit of profit and authoritarian control.