GENEALOGIES: Tracing lineages of Compromise

THIS CHAPTER CONSISTS OF 3 PARTS:
 

1. GLITCH A/EFFECT


WRITTEN (THEORY)


>>> ALT.HISTORY FROM ARTIFACT TO EFFECT
(2010 - ...)
>>> A stranger like Dada / Weird like quaint collage ¯_( ͡ఠʘᴥ ⊙ಥ‶ʔ)ノ/̵͇̿̿/̿ ̿ ̿☞ (2016)
>>> “We already know too much for noise to exist” (2016)
>>> LEXICON OF GLITCH AFFECT (2014)

ARTWORKS (PRACTICE)

>>> Glitch Timond (2014)


2. COLOUR TEST CARDS

WRITTEN (THEORY)

>>> APPROP/R/PIRATE
>>> BEHIND WHITE SHADOWS (OF IMAGE PROCESSING) (2017)
>>> ADDENDUM (2017)


ARTWORK (PRACTICE)

>>> Pique Nique pour les Inconnues (2017 - 2020)
>>> 365PERFECT i.R.D DE/CALIBRATION ARMY (2017 - ... )
>>> PATCH (for the DE/CALIBRATION ARMY) (2017)






History often refers to the study of lines of descent and origin: the development of families and the tracing of their lineages. In reality, and especially in the digital realm, the development of material does not follow any such traditional lines of descent. If anything, the ‘historical continuity’ of digital material is one of breaks, voids, bends, forks, in-betweens, legacies, instabilities, ossification, abandonment and turns. In fact, there is no such thing as a ‘technological continuum’. Rather than ‘the history of a digital material’, there are many parallel, interconnected, non-linear, fragmented and overlapping discourses, which impact each other in many directions. Digital material is thus best described following a genealogical model.

Genealogy does not pretend to go back in time to restore what Foucault calls an "unbroken continuity that operates beyond the dispersion of forgotten things". Genealogy is a specific type of history that deconstructs that which once was unified, and makes a continuity of discontinuities. It researches the descents and emergences of how systems of affiliation come into play, and maps the understanding and meaning of the object accordingly.

Genealogy considers the many affiliated, interconnected and (geo-)fragmented processes that build their own discourses: it intends to shine a light on why particular technologies develop a social-political momentum at a specific point in time, and how this momentum changes over time. To write a genealogy means to write the stories of the emergence of a use or practice; it reveals the pre-existing battles present at the moment of arising. It threads different strands constructed from ambiguous, pre-existing discourses; it (inter)connects or juxtaposes generations of different communities and their working methods, conceptual themes and politics.

There is no such thing as a complete history. There are only the many stories from different perspectives, derived from uncertain interpretations, that are neither true nor false. The many stories of media technology are constantly subject to revision:

while their language systems emerge, meanings shift, idioms ossify and vernacular turns to affectual signifiers.
... and then they change again.

︎︎︎︎︎︎

1. Michel Foucault, "Nietzsche, Genealogy, History," in The Foucault Reader, ed. P. Rabinow (Harmondsworth: Penguin, 1984), p. 81.






GENEALOGY: GLITCH A/EFFECT
FROM ARTIFACT TO A/EFFECT
(OR: HOW GLITCH BECOMES AN INDEX FOR MEANING)

WRITTEN (THEORY) 

>>> ALT.HISTORY FROM ARTIFACT TO EFFECT (2010 - ...)
>>> A stranger like Dada / Weird like quaint collage ¯_( ͡ఠʘᴥ ⊙ಥ‶ʔ)ノ/̵͇̿̿/̿ ̿ ̿☞ (2016)
>>> “We already know too much for noise to exist” (2016)
>>> LEXICON OF GLITCH AFFECT (2014)





 
ARTWORKS (PRACTICE)

>>> Glitch Timond (2014) 


FROM ARTIFACT TO EFFECT

ALT.HISTORY (GENEALOGY OF MACROBLOCKS)


DE/CALIBRATION
TARGET
⬓⬒◨◨◧

HOW NOT
TO BE READ.PDF
▣⬓⬒◨◧

DCT ENCRYPTION
STATION

◨◧◧⬓◨

JPEG
◧◧⬓◨⬓◨

KEY
⬓⬓⬓

























A stranger like Dada / Weird like quaint collage  ¯_( ͡ఠʘᴥ ⊙ಥ‶ʔ)ノ/̵͇̿̿/̿ ̿ ̿☞
“Your work is so… so weird…”

Even though the sentence was uttered playfully and with no foul intentions, it hit me. It sounded dismissive; in my ears, my friend had just admitted disinterest. Calling something “weird” suggests withdrawal. The adjective forecloses a sense of urgency and classifies the work as a shallow event: the work is funny and quirky, slightly odd, and soon becomes background noise, ’nuff said. I tried to ignore the one-word review, but I will never forget when it was said, or where we were standing. I wish I had responded: “I think we already know too much to make art that is weird.” But unfortunately, I kept quiet.

In his book Noise, Water, Meat (1999), Douglas Kahn writes: “We already know too much for noise to exist.” A good 15 years after Kahn’s writing, we have entered a time dominated by the noise of crises. Hackers, disease, trade and stock market crashes and brutalist oligarchs make sure there is not a quiet day to be had. Even our geological time is subject to dispute. But while insecurity dictates, no-one would dare to refer to this time as the heyday of noise. We know there is more at stake than just noise.

This state is reflected in critical art movements: a current generation of radical digital artists is not interested in work that is uninformed by urgency, nor can they afford to create work that is just #weird, or noisy. The work of these artists has departed from the weird and exists in an exchange that is, rather, strange. It invites the viewer to approach with inquisitiveness; it invokes a state of mind: to wonder. Consequently, these works break with tradition and create space for alternative forms, language, organisation and discourse. It is not straightforward: it is the art of creative problem creation.

In 2016 it is easy to look at the weird aesthetics of Dada; its eclectic output is no longer unique. The techniques behind these gibberish concoctions have had a hundred years to become cultivated, even familiar.
Radical art and punk alike have adopted the techniques of collage and chance and applied them as styles that are no longer inherently progressive or new. As a filter subsumed by time and fashion, Dada-esque forms of art have been morphed into weird commodities that invoke a feeling of stale familiarity.

But when I take a closer look at an original Dadaist work, I enter the mind of a stranger. There is structure that looks like language, but it is not my language. It slips in and out of recognition and maybe, if I had the chance to dialogue or question, it could become more familiar. Maybe I could even understand it. Spending more time with a piece makes it possible to break it down, to recognize its particulates and particularities, but the whole still balances on a threshold of meaning and nonsense. I will never fully understand a work of Dada. The work stays a stranger, a riddle from another time, a question without an answer. The historical circumstances that drove the Dadaists to create the work, with a sentiment or mindset that bordered on madness, seem impossible to translate from one period to the next. The urgency that the Dadaists felt, while driven by their historical circumstances, is no longer accessible to me. The meaningful context of these works is left behind in another time. Which makes me question: why are so many works of contemporary digital artists still described—even dismissed—as Dada-esque? Is it even possible to be like Dada in 2016? The answer to this question is at least twofold: it is not just the artist, but also the audience who can be responsible for claiming that an artwork is a #weird, Dada-esque anachronism. Digital art can turn Dada-esque by invoking Dadaist techniques such as collage during its production.
But the work can also turn Dada-esque during its reception, when the viewer decides to describe the work as “weird like Dada.” Consequently, whether or not a work today can be weird like Dada is maybe not that interesting; the answer finally lies in the eye of the beholder. It is maybe a more interesting question to ask: what makes a work of art strange? How can contemporary art invoke a mindset of wonder and the power of the critical question, in a time in which noise rules and is understood to be too complex to analyse or break down?

The Dadaists invoked this power by using some kind of ellipsis (…): a tactic of strange that involves the withholding of the rules of that tactic. They employed a logic in their art that they did not share with their audience; a logic that has later been described as the logic of the madmen. Today, in a time when our daily reality has changed and our systems have grown more complex, the ellipsis of mad logic (dysfunctionality) is commonplace. Weird collage is no longer strange; it is easily understood as a familiar aesthetic technique. Radical art needs a provocative element, an element of strange that lures the viewer in and makes them think critically; that makes them question again. The art of wonder can no longer lie solely in ellipsis and the ellipsis can no longer be THE art.

This is particularly important for digital art. During the past decades, digital art has matured beyond the Dada-esque mission to create new techniques for quaint collage. Digital artists have slowly established a tradition that inquisitively opens up the more and more hermetically closed—or black boxed—technologies.
Groups and movements like Critical Art Ensemble (1987), Tactical Media (1996), Glitch Art (±2001) and #Additivism (2015) (to name just a few) work in a reactive, critical fashion against the status quo, engaging with the protocols that facilitate and control the fields of, for instance, infrastructure, standardization, or digital economies. The research of these artists takes place within a liminal space, where it pivots between the thresholds of digital language, such as code and algorithms, the frameworks to which data and computation adhere, and the languages spoken by humans. Sometimes they use tactics that are similar to the Dadaist ellipsis. As a result, their output can border on the asemic. This practice comes close to the strangeness that was an inherent component of the original power of Dadaist art.

But an artist who still insists on explaining why a work is weirdly styled like Dada is missing out on the strange mindset that formed the inherently progressive element of Dada. Of course a work of art can be strange by other means than the tactics and techniques used in Dada. Dada is not the father of all progressive work. And not all digital art needs to be strange. But strange is a powerful affect from which to depart, in a time that is desperate to ask new critical questions to counter the noise.

Notes
1. Douglas Kahn, Noise, Water, Meat: A History of Sound in the Arts (Cambridge, MA: MIT Press, 1999), p. 21. 
2. On cool as ellipsis: Alan Liu, The Laws of Cool (2008).
3. “We need creative problem creation” - jonSatrom during GLI.TC/H 20111. 
4. Within glitch art this subgenre is sometimes referred to as Tactical Glitch Art.


“We already know too much for noise to exist”
Douglas Kahn, Noise, Water, Meat (1999), p. 21

As the popularization and cultivation of glitch artifacts are spreading, it is interesting to track the development of these processes in specific case studies. One case study of a compression artifact, referred to as ‘datamoshing’, offers an especially interesting account of glitch cultivation.

The datamosh artifact is located in a realm where compression artifacts and glitch artifacts collide. The artifact caused by compression is stable and reproducible, as it is the effective outcome of keyframes being deleted. The outcome of this deletion is the visualisation of the indexed movement of macroblocks, smearing over the surface of an old keyframe. This makes the video morph into unexpected colours and forms.
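The mechanism can be sketched in a toy model. The following Python snippet is an illustrative simplification of my own (a hypothetical mini-‘codec’, not Xvid or any real format): keyframes store a full image, delta frames store only indexed movement, and the decoder applies each delta to whatever frame it last produced. Deleting a keyframe therefore smears the next shot’s motion over stale pixels:

```python
# Toy model of datamoshing: a decoder applies delta ("P") frames to
# whatever the previous decoded frame was. Deleting a keyframe makes
# the motion smear over stale content. Illustrative sketch only,
# not a real codec.

def decode(stream):
    """Decode a list of frames. A frame is ('I', pixels) for a keyframe
    or ('P', shift) for a delta frame that rotates every pixel right."""
    out, current = [], None
    for kind, payload in stream:
        if kind == 'I':                 # keyframe: replace the image wholesale
            current = list(payload)
        else:                           # delta: move the current image along
            current = current[-payload:] + current[:-payload]
        out.append(list(current))
    return out

scene_a = [1, 2, 3, 4]                  # first shot
scene_b = [9, 8, 7, 6]                  # second shot, introduced by a keyframe

stream = [('I', scene_a), ('P', 1), ('I', scene_b), ('P', 1)]
# "mosh" the stream by deleting the second keyframe:
moshed = [f for f in stream if not (f[0] == 'I' and f[1] == scene_b)]

print(decode(stream)[-1])   # → [6, 9, 8, 7]: scene_b, shifted as intended
print(decode(moshed)[-1])   # → [3, 4, 1, 2]: scene_b's motion applied to
                            #   stale scene_a pixels -- the smear
```

In a real codec the deltas are per-macroblock motion vectors plus residuals, but the smear arises for the same reason: the decoder trusts the deltas regardless of which image they land on.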

In 2005, Sven König embarked on an exploration of the politics of file standards through this particular datamoshing effect, in relation to the free codec Xvid. Xvid is a primary competitor of the proprietary DivX Pro codec (note that Xvid is DivX spelled backwards), and is often used for speedy online video distribution through peer-to-peer networks. In aPpRoPiRaTe! (Sweden: 2005), König used the codec to manipulate and appropriate ‘complete video files found in file sharing networks’. His work included an open source software script that could be used to trigger the compression effect in realtime. Through the use of the Xvid codec and copyrighted material, König tried to pinpoint the tension between the usage of non-proprietary compression codecs and their uptake in DRM (Digital Rights Management) remix strategies.

In his next project, Download Finished! (2007), König explored how the codec could be used to transform and republish found footage from p2p networks and online archives. The result became the rough material for his online transformation software, which translated ‘the underlying data structures of the films onto the surface of the screen’. With the help of the software, file sharers could become ‘authors by re-interpreting their most beloved films’.

A swift maturation of the datamoshing effect took place in 2009 at the same time as Paul B. Davis was preparing for his solo show at the Seventeen Gallery in London. Davis’ show was partially based on a formal and aesthetic exploration of the artifact. While the show was intended to critique popular culture by way of datamosh interventions, this culture caught up with him overnight, when the effect penetrated the mainstream just prior to the opening of his show. Davis’ reaction to the fate of appropriation plays out as the opening quote of this chapter: ‘It fucked my show up...the very language I was using to critique pop content from the outside was now itself a mainstream cultural reference’.

Prominent music videos, including Kanye West’s Welcome To Heartbreak (2009, directed by Nabil Elderkin) and Chairlift’s Evident Utensil (2009, Ray Tintori), had indeed popped up, bringing the datamoshing effect into the mainstream via MTV. The new wave of interest in the effect generated by these clips led to a YouTube tutorial on datamoshing, followed by an explosion of datamosh videos and the creation of different datamosh plugins, developed by, for instance, the Japanese artist UCNV, Bob Weisz, and Sam Goldstein (Goldmosh).

At the 2010 GLI.TC/H festival in Chicago, thirty percent of the entries were based on the datamoshing technique (around 80 of a total of 240). The technique that had been used to critique popular culture by artists like König or Davis was now used to generate live visuals for the masses. Datamoshing had become a controlled, consumed and established effect. The aesthetic institutionalization of the datamoshing artifact became more evident when Takeshi Murata’s video artwork Monster Movie (2005), which uses datamoshing as a form of animation, entered the Museum of Modern Art in New York in a 2010 exhibition.

This ‘new’ form of conservative glitch art puts an emphasis on design and end products, rather than on the post-procedural and political breaking of flows. There is an obvious critique here: to design a glitch means to domesticate it. When the glitch becomes domesticated into a desired process, controlled by a tool or technology (essentially cultivated), it has lost the radical basis of its enchantment and becomes predictable. It is no longer a break from a flow within a technology, but instead a form of craft. For many critical artists, it is no longer considered a glitch, but a filter that consists of a preset and/or a default: what was once a glitch is now a new commodity.


Over the past decades, the glitch art genre has grown up so much: glitch (and glitch art) is not just an aesthetic in digital art; glitch is in the world now.

I wrote The Glitch Moment(um) a little over 10 years ago. A main point then was that every form of glitch, either accidental or designed, will eventually become a new form, or even a meaningful expression. Since then, digital technologies have reinforced their ubiquitous and pervasive presence. And with their ubiquity, artifacts such as cracked screens, broken images, colour channel shifts and other forms of noise have become everyday occurrences. In fact, everything seems to be littered with glitch. Glitches are on the flyer of my local falafel shop. They are in the commercials of my least favourite politicians. I can even deploy different types of glitches as a face filter on Instagram. As a result, glitches have moved far away from being just a scary or unexpected break; they are no longer just a moment of digital interruption, a moment when what lies ahead is unknown. The glitch is in the world now, not just as a digital event but also as a meaningful signifier; a figure of speech or a metaphor, with its own dialect and syntax. Just think about how, in the movies, ghosts still announce their presence by adding analogue noise to a digital signal, or how blocky artifacts often signify a camera travelling through time; how lines and interlacing often describe an alien compromise of our telecommunication systems, and how hackers still work in monochrome, green environments.

From its beginnings, glitch art used to exploit medium-reflexivity, to rhetorically question a ‘perfect’ use, or technological conventions and expectations. Artists adopted the glitch as a tool to question how computation shapes our everyday life. But today, distortions prompt the spectator to engage not only with the technology itself, but also with complex subcultural and meta-cultural narratives and gestures, presenting new analytical challenges. In short, the role of glitch in our daily lives has evolved and the glitch art genre has grown up.

But besides re-evaluating the study of glitch as a carrier of meaning, the glitch, or the digital accident, has also evolved on a fundamental level: in timing and space. Due to the networked nature of digital technologies, digital accidents are now decentralised; their causes and effects ripple through platforms, while the timing of these accidents is no longer linear. The glitch no longer takes place as a linear sequence of events (interruption, glitch, debugging or collapse); its interruptions do not happen momentarily, but instead as randomly timed pings, inviting collapse or complexity anywhere the network reaches.

On the flip side, while the dominant, continuing search for a noiseless channel is still a regrettable, ill-fated dogma, we are filtering, suppressing and dismissing noise and glitch more widely than ever. As a result of this insight, I recently shifted my research to Resolution Studies. In a small new book, titled Beyond Resolution (2021), I describe the standardization of resolutions as a process that generally imposes efficiency, order and functionality on our technologies. But I also write that resolutions do not just involve the creation of protocols and solutions. They also entail the obfuscation of compromise and the black-boxing of alternative possibilities, which, as a result, are in danger of staying forever unseen or even forgotten. In this new book I deploy the glitch as a tool for visiting and re-evaluating these compromises. I have experienced that while the glitch has evolved and changed, it is still as powerful as it was a decade ago.


Glitch Art genre
As the popularization and cultivation of the glitch genre have now spread widely, I believe it is important to track the development of these processes in specific case studies and to create ‘a lexicon of distortions’. New, fresh research within the field of noise artifacts is necessary. In an attempt to expand on A Vernacular of File Formats, I propose a lexicon that deconstructs the meanings of noise artifacts; a handbook for navigating glitch clichés as employed specifically in the genre of Sci-Fi.

This Lexicon intends to offer insight into the development of meaning in the aesthetics of distortion in Sci-Fi movies throughout the years, via an analysis of 1200 Sci-Fi trailers. Starting with trailers from 1998, I reviewed 30 trailers per year to obtain an insight into the development of noise artifacts in Sci-Fi, from before the normalization of the home computer to Sci-Fi adopting the contemporary aesthetics of our ubiquitous digital devices. My source for the trailers is the Internet Movie Database, where I accessed lists of the top US-grossing Sci-Fi titles per year. When watching these trailers, I took screenshots whenever a distortion occurred. Then, if possible, I would interpret them. Currently the database includes findings from research into 630 trailers (1998-2018), but I wish to extend it to 1980-2020, spanning 40 years of advancements in digital technologies and their distortions.

Sci-Fi relies on the literacy of the spectator (references to media technology texts, aesthetics and machinic processes) and their knowledge of more ‘conventional’ distortion or noise artifacts. Former disturbances have gained complex meaning beyond their technological value; with the help of popular culture, these effects have transformed into signifiers provoking affect. For example, analogue noise conjures up the sense of an eerie, invisible power entering the frame (a ghost), while blocky artifacts often refer to time travelling or a data offense initiated by an Artificial Intelligence. Interlacing refers to an invisible camera, while camera interface aesthetics (such as viewfinders and tracking brackets or markers around a face) refer to observation technologies. Hackers still work in monochrome, green environments, while all holograms are made from phosphorous blue light. And when color channels distort, the protagonist is experiencing a loss of control.

Click on a year and see all the a/effects per trailer of that year!

1998 - In The Truman Show, secret CCTV observation cameras are outfitted with scanlines and vignetting.
1999 - Unicode characters displayed as streams of monochrome, vertical data signify the hackers navigating ‘the Matrix’.
2000 - Sci-Fi screens feature a lot of blue because sets often use tungsten (warm) light. Filmmakers compensate for this in post-processing, during which blue colors are affected the least and maintain their vibrancy the best. In this shot from Supernova, a critical SOS signal is received.
2001 - In Jimmy Neutron, an alien observes the parents. A voice-over says: “The crummy aliens stole our parents”.
2002 - S1m0ne, a digital simulation, is deactivated and falls apart into a million pixels.

2003 - “The machines are starting to take over!” is uttered when T-X knocks out the Terminator. A combination of what seem like digital and analogue, monochrome red distortions covers the ‘interface’ of the Terminator’s point of view as he goes down.
2004 - In The Manchurian Candidate, soldiers are kidnapped and brainwashed for sinister purposes. Some of the shots use military night-vision equipment.
2005 - In Stealth, an artificial intelligence program has “rewired itself and chosen its own target”. Blue, phosphorous holograms are flanked by indecipherable diagrams and information.
2006 - In Deja Vu, ‘experimental surveillance technology’ uses grid-like, monochrome maps on top of maps.
2007 - Umbrella Corp uses surveillance technology that deploys tracking brackets and facial markers to compare a target to an image file.
2008 - In Quarantine, interruptions in live television streams are no longer illustrated by analogue noise, but by macroblocking artifacts (referencing new .mp4 and streaming technologies).
2009 - A fantastic year for glitch artifacts in Sci-Fi; my favorite trailer is The Fourth Kind, which features monochrome EVP alongside analogue, wobbulating video registrations.
2010 - The good old text based, cyano green (old and hacker) console that functions as a portal to Tron.
2011 - In Source Code, a soldier can not only jump back in time but also into someone else's body. These jumps are always rough and confusing. Aesthetically, a jump looks like a body falling apart into little triangular vectors, travelling across a somewhat noisy, blocky [that must be the time shift] wire plane.
2012 - Looper is set in 2074. In this time, when the mob wants to get rid of someone, the target is sent into the past, where a hired gun awaits. Time jump problems are shown by a sliced image with ghosting colors. 
2013 - In Elysium, Max is observed through a broken monitor. It is so action packed, even the color channels are no longer properly aligned. 
2014 - During a fight scene between Electro, who has the ability to control electricity, and Spiderman, the billboards of Times Square go all glitchy. The Amazing Spider-Man™ 2 (2014) was shot on KODAK VISION3 color negative film. All the billboards glitch and finally explode, while Kodak is one of the last billboards left standing.
 
2015 - A group of teens discover secret plans for a time machine and construct one. However, things start to get out of control. This is when blocking artifacts occur (similar to DV blocking when a tape is being FFWD-ed).
2016 - In Captain America, archival footage features very clean and clear (digital) scan lines. 
2017 - A very interesting year for noise artifacts; several trailers use them very meaningfully. In Ghost in the Shell, ‘noise’ is more complex than contemporary compression artifacts (combining color channels, blocks, lines and some structures that are not, as far as I can see, directly referencing any compression). This must signify the Ghost, existing and developing as a very complex creature inside the networks.
2018 - In Annihilation, a biologist is confronted with a mysterious zone where the laws of nature (and distortion) don't apply. Here distortions are not destroying something; they are ‘making something new’.


3 screen render for NXT: Still Processing, Amsterdam, Netherlands, 2025.

a desktop tele-chorale organised by the Angel of History;
congregating the women whose faces are appropriated
for use on color-calibration test cards

Together, they form the i.R.D. Perfect De/Calibration Army.





WRITTEN (THEORY)

>>> APPROP/R/PIRATE
>>> BEHIND WHITE SHADOWS (OF IMAGE PROCESSING) (2017)
>>> ADDENDUM (2017)


ARTWORKS (PRACTICE)

>>> Pique Nique pour les Inconnues (2017 - 2020) 
>>> The i.R.D Perfect DE/CALIBRATION ARMY and 365PERFECT (2017 - ...) 
>>> PATCH (for the i.R.D Perfect DE/CALIBRATION ARMY) (2017) 




Over the years, I have found many different copies of the image of my face (hailing from A Vernacular of File Formats) used both with and without permission or accreditation.
>Here< you can find a small collection of examples

I believe and publish with a Copy <it> Right ethic:

First, it’s okay to copy! Believe in the process of copying as much as you can; with all your heart is a good place to start – get into it as straight and honestly as possible. Copying is as good (I think better from this vector-view) as any other way of getting ’there.’ 
NOTES ON THE AESTHETICS OF ‘copying-an-Image Processor’
– Phil Morton (1973).

To me, copying as a creative, exploratory, and educational act is free and encouraged, provided proper accreditation is given. However, when copying becomes commodification and profit is anticipated, explicit permission must be sought and compensation may be requested.
 
As I began finding my face on commercially available objects, commodified and uncredited, I wondered what it meant to lose authorship and ownership of my face, and whether historical precedents exist.

During my research I came across the stories of color test cards: photographs of Caucasian women used to calibrate analog and digital image-processing technologies. These women have been reused endlessly, yet very little—sometimes not even their names—is known about them.


This resulted in the research essay Behind White Shadows, which was published in:

Behind White Shadows, essay for solo show (TRANSFER Gallery, NY, 2017)
Faceless (ed. Bogomir Doringer; De Gruyter, 2018)
Performing the System (ed. Nora Brünger, Luzi Gross, Torsten Scheid; Universitätsverlag Hildesheim, 2019)
Computer Grrls (ed. Inke Arns; HMKV, 2021)

Behind White Shadows was also featured in the Cyberfeminism Index (ed. Mindy Seu, 2023)




1. A GENEALOGY OF THE COLOUR TEST CARD











Pique Nique Pour Les Inconnues (2020)
a desktop tele-chorale organised by the Angel of History;
a congregation of women whose faces were
appropriated for use on color-calibration test cards
now sing Paul McCartney’s “We All Stand Together,” their voices substituted by desktop sounds.
Together, they form the i.R.D. Perfect De/Calibration Army.


In my essay Behind White Shadows of Image Processing (see below), I describe how standardisation, especially through color-test cards, has shaped the history of image processing.
Pique Nique pour les Inconnues extends that critique as a desktop tele-chorale in which the Angel of History convenes the often nameless figures: test-card models, bots, virtual assistants, stock-photo placeholders, and other “female objects” that engineers have long relied on to test and calibrate image quality, or to perk up architectural and virtual spaces and make them more amicable.
Although their faces persist endlessly through copying and reuse, their names and identities have vanished. Gathering on a desktop, they attempt to recover their voices by performing “We All Stand Together” using only computer-desktop sounds.



BEHIND WHITE SHADOWS OF IMAGE PROCESSING (2017 - ... ) 

Shirley, Lena, Jennifer and the Angel of History.

[[About the loss of identity, bias, image calibration and ownership]]

INTRODUCTION
While digital photography seems to reduce the effort of taking an image of the face, such as a selfie or a portrait, to the straightforward act of clicking, the photos created and stored inside (digital) imaging technologies do not just take and save an image of the face. In reality, a large set of biased, gendered and even racist protocols intervenes in the process of saving the face to memory. As a result, what gets resolved, and what gets lost during the process of resolving the image, is often unclear.

To uncover and gain better insight into the processes behind the biased protocols that make up these standard settings, we need to come back to asking certain basic questions: who gets to decide the hegemonic conventions that resolve the image? And why and how do these standards come into being? Through an examination of the history of the color test card, I aim to provide some answers to these questions.

In her 2012 lecture White Shadows: What Is Missing from Images, at the Gdansk Academy of Fine Arts in Poland, Hito Steyerl speaks about how new technologies force us to reformulate important questions about the making visible, capturing, and documenting of information. In the first part of her talk, Steyerl focuses on the use of 3D scanning in forensic crime scene investigations. Steyerl explains how the 3D scanner, or LiDAR technology (Light Detection And Ranging), sends laser beams that reflect off the surfaces of the objects being scanned. In this process, each point in space is measured and finally compiled into a 3D facsimile: a point cloud of the space. Steyerl states that this kind of capturing does not simply provide a completely new image of reality or a possibility for capturing the ‘truth’. In fact, she takes issue with the general belief that this type of new technology should be described as the ultimate documentary, forensic tool; a tool that produces 100% reliable, true evidence.

Just like any other technology, Steyerl argues, 3D scanning has its affordances, and with these affordances come blind spots: for instance, only a few scanning rigs are advanced enough to capture a moving object. Generally, a moving object becomes a blur or is not picked up at all. A “2.5D” scanning rig (a rig with just one 3D scanner that scans a space) can only provide the surface data of one side of the scanned object or space. As a result, the final scan of an object or space includes blind spots: the backs of objects, or shadows cast by objects standing in front of other objects, which, depending on the displaying technology, sometimes show up as an empty, white shell.

“What becomes visible on the back of the image is the space that is not captured. The space that is missing, missing data, the space where an object covers the view. A shadow. […] Documentary truth and evidence also [ed. rosa: includes] the missing of information. The missing people themselves.”

To scan an environment properly, the scanning would have to be done from every angle. But in most 3D scanning, certain elements of the scanned environment will always exist in the shadow of the nearest object, resulting in white patches, blank spaces or hollowed-out shells that remain in the dataset as a log of the scanner's unseen, unregistered space. An important question, then, not just for 3D scanning but for any technology, is: who decides the point of view, and who stands behind the perspective from which the LiDAR, or any scanning or imaging technology, is operated? Who is casting these white shadows? To formulate the start of a possible answer to these questions, what follows is a history of resolutions, specifically the history of the color test card.



A collage of different resolution test cards on top of each other. 
Test images
A fundamental part of the history of image processing and of the standardization of settings within both analog and digital compression and codec technologies is the test card, chart, or image. A standard test image is an image (file) used across different institutions to evaluate, for instance, image processing, compression algorithms, and rendering, or to analyze the quality of a display. One type, the test pattern or resolution target, is typically used to test the rendering of a technology or to measure the resolution of an imaging system. Such a pattern often consists of reference line patterns with clear, well-defined thicknesses and spacings. By identifying the finest set of lines a system can still distinguish, one determines the resolving power of that system; and by using identical standard test images, different labs are able to compare results, both qualitatively and quantitatively.
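The resolving-power measurement described above can be made concrete with a small sketch. The code below is an illustration, not any lab's actual procedure: it generates one row of a bar target at a given line-pair period, and encodes the sampling rule that a line pair is only resolvable if its period spans at least two samples (the Nyquist limit).

```python
def bar_target_row(width, period):
    """One row of a resolution bar target: alternating white (255) and
    black (0) bars. `period` is the width in pixels of one line pair
    (one white bar plus one black bar)."""
    return [255 if (x % period) < period / 2 else 0 for x in range(width)]

def finest_resolvable_period(pixel_pitch):
    """A sampling system resolves a line pair only if its period spans at
    least two samples; finer patterns alias into unresolvable gray."""
    return 2 * pixel_pitch

# A coarse target: line pairs four pixels wide are easily resolved
# by a sensor with a one-pixel pitch.
row = bar_target_row(8, 4)
```

In practice, printed targets such as the USAF 1951 chart arrange groups of such bars at progressively finer periods, and the finest group whose bars remain distinguishable gives the system's resolving power.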

A second type of standard test image, the color test card, was created to facilitate skin-color balancing and adjustment, and can be used, for instance, to test color rendering on different displays. While technologies such as photography, television, film and software all have their own color test images, these test images typically involve a norm reference card showing a Caucasian woman wearing a colorful, high-contrast dress. Even though many different “Shirleys” (in analog photography) or “China Girls” (in color film chemistry) modeled for these test cards, the cards were never created to serve variation. In fact, the identities of the many Shirleys who modeled for these norms remained unknown; they simply formed a “normal” standard, as is often printed on the cards themselves. As such, the cards cultivated a gendered, race-biased standard reference, which even today continues to influence our image-processing technologies. In his 1997 book White, British film studies professor Richard Dyer observes the following: “In the history of photography and film, getting the right image meant getting the one which conformed to prevalent ideas of humanity. This included ideas of whiteness, of what color — what range of hue — white people wanted white people to be.”

The de facto ‘ideal’ standard in play since the early twentieth century in most analog photo labs has thus been positively biased towards white skin tones, which naturally have a high level of reflectivity. As a result, it was not only difficult to capture darker and black skin tones, it also proved impossible to capture two highly contrasting skin tones within the same shot: when trying to capture a black person sitting next to a white person, the reproduction of African-American facial images would often lose detail, pose lighting challenges, and ultimately render ashen-looking facial skin tones that contrast strikingly with the whites of eyes and teeth. The Caucasian test card is thus not about variation but about setting a racist standard, one that was dogmatically implemented for over 40 years.




Analog photography's Shirley cards
Photographic film stock's failure to capture dark skin tones was not a technical inevitability but a choice. Scholar Lorna Roth writes in her 2009 article “Looking at Shirley, the Ultimate Norm” that film emulsion could have been designed with more sensitivity to the continuum of yellow, brown and reddish skin tones. Such a choice, however, would have had to be motivated by a recognition of the need for an extended range; after the development of the color films Kodacolor (1928, for motion pictures) and Kodachrome (1935, for still photography), there seemed to be little motivation to acknowledge or cater to a market beyond white consumers.

It was only when chocolate manufacturers and wooden furniture makers complained about the difficulties they faced in reproducing different shades of brown that Kodak's chemists started changing the sensitivities of their film emulsions (the coating on the film base that reacts with chemicals and light to produce an image), gradually extending the film stock's dynamic range, the ratio between the maximum and minimum measurable light intensities (white and black, respectively). Progress was made during the 70s and 80s, but in 1997 Kodak's dynamic range made a real leap forward with the introduction of its popular consumer film Gold Max. Roth notes how Kodak executive Richard Wien described this development in the sensitivity of film stock as being able to “photograph the details of the dark horse in low light.” Still, in the real world, true white and black do not exist, only varying degrees of light-source intensity and subject reflectivity. Moreover, the concept of dynamic range is complex and depends on whether one is measuring a capture device (such as a camera or scanner), a display device (such as a print or computer display), or the subject itself.
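Dynamic range, as used above, can be expressed as a small calculation. The numbers below are illustrative only, not Kodak's actual film specifications: the ratio between the brightest and darkest measurable intensities is conventionally stated in photographic stops (doublings of light) or in decibels.

```python
import math

def dynamic_range(i_max, i_min):
    """Dynamic range of a capture or display system, given the maximum
    and minimum measurable light intensities."""
    ratio = i_max / i_min
    stops = math.log2(ratio)           # each stop is a doubling of light
    decibels = 20 * math.log10(ratio)  # common convention for sensors
    return ratio, stops, decibels

# A hypothetical system measuring intensities from 1 to 1024 units
# spans a 1024:1 contrast ratio, i.e. 10 stops.
ratio, stops, db = dynamic_range(1024, 1)
```

Extending film emulsion "towards darker browns", in these terms, means pushing `i_min` lower while keeping the highlights from clipping, which widens the ratio the stock can record in a single exposure.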

This is why, around the same time that these changes in the sensitivity of film emulsion took place, the color test card was also revisited, albeit only slightly. First, in the mid-90s, Japanese photography companies redesigned their Shirley cards using stock images from their own color preference tests; since then, the local reference cards have featured Japanese women with light yellow skin. Then, in 1995, Kodak designed a multiracial norm reference card. From the single “Caucasian” woman surrounded by the necessary color-balancing information codes, Kodak's Shirley evolved into an image of three women with different skin colors (Caucasian, Asian, African-American), dressed in brightly colored, contrasting clothing.





John P. Pytlak posing with his invention: LAD girl

Laboratory Aim Density (LAD) system for Kodak. 

LAD color test strip


Have your highlights lost their sparkle?
And the midtones lost their scale?
Are your shadows going smokey?
And the colors turning stale?
Have you lost a little business to labs whose pictures shine?
Because to do it right – takes a lot of time.
Well, here’s a brand new system. It’s simple as can be!
Its name is LAD – an acronym for Laboratory Aim Density.
– John P. Pytlak

Motion picture color correction: China Girls vs Maureen the LAD Girl
In a similar vein to analog photography, from the 1920s to the early ’90s the analog motion picture industry had its own color-test equivalent, named color-timing. The term ‘timing’ hails from the days before automated printers, when the photochemical process used a timer to determine how long a particular film strip had to sit in the developer. During the decades of color-timing, hundreds of female faces or ‘China Girls’ (which some have described as a reference to the porcelain mannequins used in early screen tests) appeared in film leaders, typically for only 1–4 frames, never intended to be seen by anyone other than the projectionist.

The color-timing practice was not completely reliable; it involved a different China Girl and slightly different lighting each time. This is one of the reasons why, around the 1980s, the technology was gradually superseded by the Laboratory Aim Density (LAD) system, developed by John Pytlak. Along with color-timing, the anonymous China Girls, whose occupations ranged from studio workers to models, became artifacts of an obsolete film history, and only one “LAD Girl” became the model for the color reference card: Maureen Darby. Pytlak notes that “[i]t was primarily intended as ‘representative’ footage, and not a standard.” Darby was filmed on two 400-foot rolls of 5247 film, and “all film supplied since the introduction of LAD is made from the same original negative, either as a duplicate negative, and now as a digital intermediate.”

Two decades later, after spending a year and a half restoring lost color-strip images, artist Julie Buck and archivist Karin Segal finally found a way to bring the China Girls, the women of color correction, into the spotlight. Rescuing the China Girls from the margins of cinema, they intended to recast them as movie stars in their own right. In their 2005 “Girls on Film” exhibition statement, Buck and Segal write: “Even though these women were idealised, they were only seen by a handful of men. Their images exist on the fringes of film. They were abused and damaged. We wanted to give them their due.” Buck and Segal were unable to find any case of a China Girl turned film actress, and finally used their collection of images to create the short film Girls on Film (2008), in which they recast the China Girls as its stars.

Carole Hersee on Test Card F, which aired on BBC Television from 1967 to 1998.
Marie McNamara at the NBC studios. 

“You know what a black-and-white test pattern is,” she told The New York Times in 1953.
“Well, I’m it for color. I’m the final check.”
- Marie McNamara
 
One standard does not fit all (or: physics is not just physics)
The onset of color television brought no big surprise; in this medium too, producers hired Caucasian ladies as their test models, reinforcing longstanding biases in gender and race—the only difference being that in television, the objectified test model was known by her real name. The red-haired model Marie McNamara, for instance, became known in the 1950s when she modeled to calibrate the NBC television cameras, while Carole Hersee is known as the face of the famous Test Card F (and later J, W, and X), which aired on BBC Television from 1967 to 1998.

Cameramen continued to use Caucasian color girls, either live models or photographs, to test their color settings. If an actor with a different skin color entered the scene, the calibration process was supplemented with special lighting or makeup techniques to ensure that the non-white participants looked good on screen, a task that was not always easy and that deferred the development and implementation of adequate, non-biased technologies. Lorna Roth concludes in her seminal article that the habitual racism embedded within color reference cards did more than just influence major standard settings, such as hue, chroma, contrast, quantization, and lightness (luminance) values. To her, it is also responsible for the highly deficient rendering of non-Caucasian skin tones, which has resulted in an ongoing need for compensatory practices. While a ‘one size fits all’ approach (or, as a technician once explained to Roth, “physics is physics”) has become the standard, in reality different complexions reflect light differently. What this reveals is a complex interplay between the different settings involved in capturing a subject. Despite the obvious need to factor in these different requirements for different hues and complexions, television technically implemented support for only one: the Caucasian complexion.

Moreover, the history of color bias did not end when old analog standards were superseded by digital ones; digital image (compression) technologies, too, inherited legacy standards. As a result, even contemporary standards are often rooted in these racist, habitual practices, and new digital technologies still feature embedded racial biases. For instance, in 2009 and 2010 respectively, HP webcams and Microsoft's Xbox Kinect controller had difficulties tracking the faces of African-American users. Consumer reports later attributed both problems to “low-level lighting”, again moving the conversation away from important questions about skin tone towards the determination of a proper lighting level, still echoing a dull, naive physics-is-physics approach.

My collection of Caucasian test cards in the Behind White Shadows exhibition.
Lena JPEG
In his retrospective article “How I Came Up with the Discrete Cosine Transform”, Nasir Ahmed describes his conception of the use of a cosine transform in the field of image compression. Ahmed writes how he proposed to the National Science Foundation (NSF) a study of the application of the cosine transform; much to his disappointment, however, the NSF did not support the proposal, because the whole idea seemed “too simple.” Ahmed decided to keep working on the problem, ultimately publishing his results in the January 1974 issue of IEEE Transactions on Computers. Today, more than 40 years after Ahmed's proposal, the DCT is widely used in digital image compression. The algorithm has, for instance, become a core component of the JPEG image compression technology, developed by the Joint Photographic Experts Group.

“I remember dedicating the whole summer of 1973 to work on this problem. The results that we got appeared too good to be true, and I therefore decided to consult Harry Andrews later that year at a conference in New Orleans. […] When I sent the results back to Harry Andrews, he suggested that I publish them. As such, I sent them to the IEEE Computer Transactions, and the paper was then published in the January 1974 issue. […] Little did we realize at that time that the resulting “DCT” would be widely used in the future!”

Shortly after Ahmed's initial proposal, during the summer of 1973, the implementation of the DCT in digital image compression also became a subject of experiments at the University of Southern California's (USC) Signal and Image Processing Institute. In a 2001 newsletter, Jamie Hutchinson offers an insightful retrospective of the testing of the DCT, focusing on the creation of, again, a Caucasian, female color test card. In the piece, Hutchinson quotes Alexander Sawchuk, who reminisces about his work on the test card during his time as assistant professor of electrical engineering. Sawchuk explains how he and his colleagues were tired of the usual test images, the “dull stuff”: “They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy.” Sawchuk goes on to describe how they tore out the magazine's centerfold and scanned its top third with their Muirhead scanner, which they had customized with analog-to-digital converters to create a 3-channel, 512 x 512px test image. After the tricky process was finished, Sawchuk realized that they had lost a line during scanning. Moreover, the timing of the analog-to-digital converters was off, making the final test image slightly elongated compared to the original. Because of time pressure, however, the engineers settled for the distorted version and simply replicated the top line to arrive at 512. Those three sets of 512 lines—one set for each color, created imperfectly—would become a de facto industry standard.
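The transform at the center of these experiments can be written out compactly. The sketch below is a textbook orthonormal 2D DCT-II (the transform JPEG later standardized on 8x8 blocks), not the USC team's actual code: it concentrates a block's energy into a few low-frequency coefficients, which is what makes the quantization and compression that follow it effective.

```python
import math

def dct2(block):
    """Orthonormal 2D DCT-II on an N x N block (JPEG uses N = 8)."""
    N = len(block)
    # scale factor making the basis orthonormal
    def alpha(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    # 1D basis matrix: C[k][n] = alpha(k) * cos(pi * (2n + 1) * k / (2N))
    C = [[alpha(k) * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
          for n in range(N)] for k in range(N)]
    # 2D transform: F = C . block . C^T
    tmp = [[sum(C[k][n] * block[n][m] for n in range(N))
            for m in range(N)] for k in range(N)]
    return [[sum(tmp[k][m] * C[l][m] for m in range(N))
             for l in range(N)] for k in range(N)]
```

For a perfectly flat 8x8 block all the energy lands in the single DC coefficient; for natural image blocks most of it lands in the top-left corner, and the near-zero high-frequency coefficients are what compression later discards.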

The Miss November 1972 centerfold that the USC employees used for testing the implementation of the DCT featured Caucasian model Lena Söderberg (born Lena Sjööblom). Her image, ‘the Lenna’ (spelled with a double n to promote the right pronunciation), quickly became the single most used picture in image-processing research, and even one of the first pictures uploaded to ARPANET, the precursor of today's internet. In A Note on Lena (1996), David Munson, University of Illinois professor and editor-in-chief of IEEE Transactions on Image Processing, explains why he believes the Lena image became an industry standard: “First, the image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image! Second, the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image-processing research community gravitated toward an image that they found attractive.” Munson goes on to describe why the Lena image has become such an issue: “some members of our community are unhappy with the source of the Lena image. I am sympathetic to their argument, which states that we should not use material from any publication that is seen (by some) as being degrading to women.”

While the use of the Lena image remained a topic of discussion, and its rights were never properly cleared or even checked with Playboy, by 1991 SIPI (USC's Signal and Image Processing Institute) had actually started distributing the image of Lena, for a fee, to researchers all over the world. While Lena was regularly found on the pages of image-processing journals, books, and conference papers, Playboy only became aware of these transgressions when the journal Optical Engineering featured Lena on its July cover. In August 1991, Optical Engineering received a letter from Playboy Enterprises, Inc. asking them, “as fellow publishers”, to cease any unintentional, unauthorized use of the image and to contact Playboy for permission for any future use of their copyrighted material. The International Society for Optical Engineering (SPIE) responded, arguing that “[t]he image is widely used in the worldwide optics and electronics community. It is digitized and its common use permits comparison of different image processing techniques and algorithms coming out of different research laboratories.” They also pointed out that SPIE is a nonprofit scientific society and that the material published by SPIE is intended for educational and research purposes.

SPIE reached an understanding with Playboy, but in a January 1992 editorial, SPIE editor Brian J. Thompson warned that “it is each author's responsibility to make sure that materials in their articles are either free of copyright or that permission from the copyright holder has been obtained.” For her part, Eileen Kent, Vice President of New Media at Playboy, publicly commented on the issue (“We decided we should exploit this, because it is a phenomenon”) and granted SPIE authorization for all further use of the image. According to SPIE publications director Eric Pepper, “it was almost as if Lena had entered the public domain by that time. Almost, but not quite.”

The Lenna, Playboy centerfold, 1972.


In May 1997, almost 25 years after being Miss November, Lena Söderberg attended the 50th anniversary of the Imaging Science and Technology (IS&T) Conference in Boston. Jeff Seideman, the president of the Boston IS&T, had arranged for Lena to appear and after the event, Seideman started working with Playboy's archivist to re-scan Lena's image and compile the missing information, including the type of photo emulsion used to make the print featured in the magazine, and the technical specifications of the scanner. As a result, Seideman hoped that the image of Lena would remain a standard reference image for compression technologies throughout the 21st century. Today, the standard Lena test image is still downloadable from several laboratory sites.

But the controversy around the Lena image did not end in the 90s. In 2001, David Munson, editor of the IEEE's image-processing journal, wrote: “It was clear that some people wanted me to ban Lena from the journal […] People didn't object to the image itself, but to the fact that it came from Playboy, which they feel exploits women.” Rather than ban Lena, Munson wrote an editorial in which he encouraged authors to use other images. “We could be fine-tuning our algorithms, our approaches, to this one image,” he says. “They will do great on that one image, but will they do well on anything else?” In 2016, Scott Acton, editor of IEEE Transactions, proposed to the journal's editorial board a prohibition on the use of Lena in any published research: “In 2016, demonstrating that something works on Lena isn't really demonstrating that the technology works.” Acton believed that the Lena image “doesn't send the right message” to female researchers about their inclusion in the field. But Acton's strongest objections were technical in nature: the Lena image contains about 250,000 pixels, some 32 times fewer than a picture snapped with an iPhone 6. And then there is a quality problem: the most commonly used version of the image is a scan of a printed page. The printing process does not produce a continuous image, but rather a series of dots that trick the eye into seeing continuous tones and colors. Those dots, Acton says, mean that the scanned Lena image is not comparable to photos produced by modern digital cameras. Short of an all-out ban in the journal, he suggests, making authors aware of the image's technical and ethical issues might be a way to usher Lena gracefully into retirement.

While it is clear that the use of the Lena image opened a discussion about embedded bias and the consideration of gender in test card usage, many questions remain unanswered: how much are the performance, texture and materiality of digital photography actually influenced by the use of the image of a Caucasian Lena? What would it have meant for the standardization of digital image compression if the image chosen for the test card had been of the first African-American Playboy centerfold, Jennifer Jackson (March 1965)? Or if the 512x512px image had instead featured Grace Murray Hopper, one of the pioneers of computer programming, responsible for inventing some of the first compiler-related tools, and the woman who, coincidentally, popularized the widely used computer slang “bug”? Or Christine Darden, an African-American researcher at NASA who pioneered research into supersonic flight? How much do the compression standards we use on a day-to-day basis reflect the complexities of the ‘good’ 512x512px Lena image, and how well do these standard settings function when capturing another kind of color complexity?




Christine Darden in the control room of NASA Langley's Unitary Plan Wind Tunnel in 1975. Credit: NASA

From YouTube: John Knoll. Jennifer Knoll in Paradise, with multiplier effect in Photoshop.
“Dear Jennifer,
Sometime in 1987, you were sitting on a beach in Bora Bora, looking at To’opua island, enjoying a holiday with a very serious boyfriend. […] This photograph of a beautiful moment in your personal history has also become a part of my history, and that of many other people; it has even shaped our outlooks on the world at large. John’s image of you became the first image to be publicly altered by the most influential image manipulation program ever. […] In essence, it was the very first photoshop meme—but now the image is nowhere to be found online.
Did John ask you if he could use the image? Did you enjoy seeing yourself on the screen as much as he did? Did you think you would be the muse that would inspire so much contemporary image making? Did you ever print out the image? Would you be willing to share it with me, and so, the other people for whom it took on such an unexpected significance? Shouldn’t the Smithsonian have the negative of that image, not to mention digital backups of its endless variations?
All these questions have made me decide to redistribute the image ‘jennifer in paradise’ as well as I can, somewhat as an artist, somewhat as a digital archeologist, restoring what few traces of it I could find. It was sad to realize this blurry screen grab was the closest I could get to the image, but beautiful at the same time. How often do you find an important image that is not online in several different sizes already?”

Constant Dullaart: Jennifer in Paradise – the correspondence. 2013.


Jennifer in Paradise
A woman sits with her back towards us, topless, on a beach. Silver sand, blue water, a green island in the distance. We can’t see her face, but we know her name: Jennifer. This photo, taken in 1987 by one of the two original creators of Photoshop, John Knoll, became the standard test image for the development and implementation of Photoshop and its suite of creative effects. Twirling, deleting and copying Jennifer were just some of the processes tested on the image. At that time, in the early days of digital imaging, there was no large array of digital images available, which is why this 24-bit scan of a holiday photo of John’s soon-to-be wife Jennifer ‘Knoll’ became the standard test image throughout Photoshop’s development. This is also one of the reasons why the image did not disappear when Photoshop moved out of its development phase; when Photoshop was finally ready for public demonstrations, John and his brother Thomas used the image again and again in public and online demos. “It was a good image to do demos with,” John Knoll recalls. “It was pleasing to look at and there were a whole bunch of things you could do with that image technically.”

As Dutch artist Constant Dullaart explains in his Chaos Computer Club presentation The Possibility of an Army, John Knoll thereby confirmed an age-old motif: a man objectifying a female body. But besides being critical, Dullaart also underlined the special cultural-historical value of the artifact, which formed a key inspiration for his 2013 Future Gallery solo show Jennifer in Paradise. In this show, Dullaart focused on the excavation and exhibition of a reconstruction of the Jennifer image. In an open letter accompanying the show, Dullaart describes the image of Jennifer as an important artifact in the history of software development and as an anecdote in Adobe’s history, and asks Jennifer to share the original image file with the world. This sentiment was later echoed by Gordon Comstock in a 2014 piece for the Guardian, in which he describes the image as being as “central to the modern visual vernacular as Eadweard Muybridge’s shots of galloping horses or the first use of perspective.” In a way, just like the Lena image, Jennifer has become ‘a phenomenon’.

While Dullaart never obtained any rights or permissions for the use of the Jennifer image, he digitally reconstructed the original and created an image series consisting of Photoshopped versions, materialized as wallpapers and a series of prints featuring enthusiastically filtered Jennifers (twirled, blurred, etc.). Dullaart also spread the digitally reconstructed version of the original image with an added payload: he steganographically embedded messages in the reconstructed JPEG file. By doing so, he intended to treat the JPEG not just as an image but as a unique container format for content, and to open a debate on the value of the digital file (format). The reconstructed Jennifer JPEG does not just carry the reconstructed image information; via steganography it has become a unique container and placeholder for discussing the materiality of digital photography. In terms of monetization, Dullaart only sells the password to the encrypted payload added to the reconstructed JPEG: access to his secret message. Finally, in an effort to translate the work to the context of the gallery, Dullaart organized a performance in which he briefly revealed his secret message, written in phosphorescent paint on top of the wallpaper, by shining a blacklight on its surface, followed by the destruction of the blacklight as a metaphor for encryption (and inaccessibility).
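Dullaart's exact steganographic technique is not documented here, but the idea of the JPEG as a container can be illustrated with one of the simplest approaches: appending a payload after the End Of Image marker, which decoders ignore, so the file still displays as an ordinary picture. The sketch below is a generic illustration under that assumption, not a reconstruction of Dullaart's method (and, unlike his, the payload here is unencrypted).

```python
EOI = b"\xff\xd9"  # JPEG End Of Image marker; decoders stop reading here

def embed(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Hide a payload by appending it after the JPEG's EOI marker;
    the image renders unchanged in ordinary viewers."""
    if EOI not in jpeg_bytes:
        raise ValueError("not a complete JPEG stream")
    return jpeg_bytes + payload

def extract(stego_bytes: bytes) -> bytes:
    """Recover everything after the first EOI marker. (Naive: a JPEG
    with an embedded thumbnail can contain an earlier EOI.)"""
    return stego_bytes[stego_bytes.index(EOI) + len(EOI):]

# A minimal stand-in for a JPEG stream: SOI marker, body, EOI marker.
fake_jpeg = b"\xff\xd8" + b"image data" + EOI
stego = embed(fake_jpeg, b"secret")
```

More robust schemes hide data inside the image itself, for instance in the low-order bits of the DCT coefficients, so that the payload survives inspection of the file's length; the appended-payload trick trades that robustness for simplicity.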

Dullaart never received a direct response from Jennifer or John Knoll to his request to enter the original image into the public domain or to gift it to a (media-)archeological institution such as the Smithsonian. Remarkably, for his Guardian article, Comstock did manage to get a short response from both.

John Knoll seems unconvinced: “I don’t even understand what he’s doing,” he says, bristling at the idea of the image being reconstructed without permission (ironically, using Photoshop). Jennifer is more sanguine: “The beauty of the internet is that people can take things, and do what they want with them, to project what they want or feel,” she says.

And perhaps even more remarkable is the fact that the Guardian article features just one image: the original Jennifer in Paradise photo taken by John Knoll, embedded on the newspaper’s website (and thus finally entering the digital domain). Albeit indirectly, Dullaart had now fulfilled one of the main goals of his solo show.


The stories of standardisation belong in high-school textbooks, while the violence of standardisation should be studied in every university curriculum. By illuminating these stories, we will reveal (and may possibly undo) the white shadows.

Similar to Constant Dullaart’s call for Photoshop’s “Jennifer” to enter the Smithsonian archive, one way to expose the habitual whiteness of color-test cards is to insist that these standard images, so often obfuscated by technological histories, enter the public domain; they need to lose their elusiveness and become common knowledge.






---- 
References / a great deal of inspiration for this text came from the amazing research undertaken by:

Lorna Roth: Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity (2009) 
video: Color film was built for white people. Here’s what it did to dark skin.
project: the colour balance project. 

Hito Steyerl: White Shadows (2012).

Constant Dullaart: Jennifer in Paradise – the correspondence (2013).

James Bridle: The Render Ghosts (2013).
in which he first connected the stories of Lena and Jennifer Knoll.

David C. Munson: A Note on Lena, in: IEEE Transactions on Image Processing 5.1 (1996): p. 3.

Scott Acton in: Corinne Iozzio: The Playboy Centerfold That Helped Create the JPEG, in: The Atlantic (02/09/2016).




Pique Nique pour Les Inconnues
Telegram stickerset based on the essay Behind White Shadows
The stickerset was made for Berlin Zentrum der Netzkunst

🕐🕑🕒🕓🕔🕕🕖🕗🕘🕙🕚🕛🕧🕜🕝🕞🕟🕠🕡🕢🕣🕤🕥🕦

The stickerset is made of the 24 clockface emojis, each connected to a sticker of an Inconnue (an unknown woman, or shell without a ghost) and her history.
Every half hour is now tied to a reminder of one of these shells.

Telegram sticker set (can only be used on a smartphone with the Telegram app installed)

    

l’Inconnue de la Seine (after 1900)
The Unknown Woman of the Seine is a death mask of an unidentified young woman that became a popular fixture on the walls of artists’ homes after 1900. She featured in various artists’ works, ranging from books to theatre and film. But while many artists felt inspired by her borrowed visage, little but her moniker is known about her, and she remains forever an asset with missing values.
- A radiolab podcast
🕐💀

Audrey Munson Bust (1913-1915)
The bust “The Spirit of Life” by Daniel Chester French (1913–1915) was inspired by ‘America’s First Supermodel’, Audrey Munson. Munson was the inspiration for more than 12 other statues in New York City, and many, many others elsewhere. Chances are that when you come across a statue, it is modelled after her.
- Andrea Geyer: Queen of the Artists’ Studios: The Story of Audrey Munson. 2007

🕑🗽

Color-timing control strips
Officially known as color-timing control strips, these test images featured anonymous female film-studio workers, affectionately dubbed “china girls” by the industry but also known as leader ladies or lilys. The images were meant only for use by the processing lab, to match color tones in the associated film; the models were often film-lab workers themselves.
- Leaderlady project at SAIC
- Julie Buck and Karin Segal: Girls of Film
- Lili on the web
- Leaderladies and friends
- Sprocket girls

🕒🎞📽
Miss NBC (1953)
The onset of color television brought no big surprise; in this medium too, producers hired Caucasian ladies as their test models, reinforcing longstanding biases in gender and race—the only difference being that in television, the objectified test model was known by her real name. The red-haired model Marie McNamara, for instance, became known in the 1950s when she modeled to calibrate the NBC television cameras, while CBS used a girl named Patty Painter.
“You know what a black-and-white test pattern is,” she told The New York Times in 1953. “Well, I’m it for color. I’m the final check.” - Marie McNamara
- Living Test Patterns: The Models Who Calibrated Color TV

🕓📺
Two Bit Teddi Smith (1961)
Lawrence G. Roberts used two different, cropped 6-bit grayscale scanned images from Playboy's July 1960 issue, featuring Playmate Teddi Smith, in his MIT master's thesis on image dithering.
- Original thesis

🕔💾◼️◽️
Carole Hersee (1967)
Hersee is known as the face of the famous Test Card F (and later J, W, and X), which aired on BBC Television from 1967 to 1998.

🕕📡
Lenna or Lena (1972)
is the name given to a standard test image widely used in the field of image processing since 1973. It is a picture of the Swedish model Lena Söderberg, shot by photographer Dwight Hooker, cropped from the centerfold of the November 1972 issue of Playboy magazine.
The spelling “Lenna” came from the model’s desire to encourage the proper pronunciation of her name: “I didn’t want to be called Leena.”
- More in Behind White Shadows
- Finding Lena, the Patron Saint of JPEGs

🕖💽🍑
Maureen the LAD girl (1974)
The color-timing practice was not completely reliable; it involved a different China Girl and slightly different lighting each time. This was one of the reasons why, around the 1980s, the technology was gradually superseded by the Laboratory Aim Density (LAD) system, developed by John Pytlak. Along with color-timing, the anonymous China Girls, whose occupations ranged from studio workers to models, became artifacts of an obsolete film history, and only one “LAD Girl” became the model for the color reference card: Maureen Darby.
Pytlak describes that it “was primarily intended as ‘representative’ footage, and not a standard.” Two 400-foot rolls of 5247 film were shot, and “all film supplied since the introduction of LAD is made from the same original negative, either as a duplicate negative, and now as a digital intermediate.”
- John P. Pytlak Technical Achievement LAD

🕗📊

Anya Major (1984)
English athlete Anya Major performed as the unnamed heroine in “1984”, the American television commercial that introduced the Apple Macintosh personal computer, with David Graham as Big Brother. The commercial was conceived by Steve Hayden, Brent Thomas and Lee Clow at Chiat/Day, produced by New York production company Fairbanks Films, and directed by Ridley Scott.
- Apple 1984 commercial

🕘🛠
Kodak Shirley card (until 1995)
The identities of the many Shirleys who modeled for these norm reference cards stayed unknown. As such, the cards cultivated a gendered, race-biased standard of reference, which even today continues to influence our image-processing technologies.
In his 1997 book White, British film studies professor Richard Dyer observes the following: “In the history of photography and film, getting the right image meant getting the one which conformed to prevalent ideas of humanity. This included ideas of whiteness, of what color — what range of hue — white people wanted white people to be.”
- Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity

🕙☣️
Jennifer in Paradise (1996)
In 2013, Constant Dullaart wrote an open letter to Jennifer Knoll, who was the model for the Photoshop test image and, as a result, the model for the first digitally altered image. In his letter, he requested that the original image be entered into the public domain or gifted to a (media) archaeological institution such as the Smithsonian. He never received a direct response, but he did digitally reconstruct the original image and created an image series consisting of Photoshopped versions, materialized as wallpapers and a series of prints featuring enthusiastically filtered Jennifers (twirled, blurred, etc.).
Remarkably, in a response in the Guardian, the original image was published.
- Constant Dullaart: Jennifer in Paradise – the correspondence
🕚‼️♊️
Kodak’s Multi-Racial norm reference card (1995)
The de-facto, racist standard that had been in play since the early part of the twentieth century in most analog photo labs was positively biased towards white skin tones, which naturally have a high level of reflectivity.
As a result, it was difficult to capture darker and black skin tones, and proved impossible to capture two highly contrasting skin tones within the same shot. When trying to capture a black person sitting next to a white person, the reproduction of any African-American facial images would often lose detail, pose lighting challenges, and present ashen-looking facial skin colors that contrast strikingly with the whites of eyes and teeth.
From the single “Caucasian” woman surrounded by the necessary color balancing information codes, Kodak’s Shirley evolved into an image of three women with different skin colors.
- Lorna Roth: Color film was built for white people. Here’s what it did to dark skin. (2015)

🕛✅📸
Jen Kind, the Everywhere Girl (1996)
Before microstock was even a thing, model Jen Kind (then Jen Anderson) posed for a stock photo shoot in Portland, Oregon as a pensive, inquiring college student, pen in hand, ready to learn.
Her image wound up being published thousands of times on book covers, in self-help guides, and major advertising campaigns. Anderson’s likeness embodied a certain widely malleable narrative that photo editors, art directors, and marketers could use to illustrate a range of projects, and led her to be branded “Everywhere Girl.”

🕧✏️🖋📈
Ann Lee (1999 - 2003)
Together with Pierre Huyghe, Philippe Parreno acquired Ann Lee’s copyright by paying 46,000 yen to a character design company called ‘K’ Works. “Ann Lee’s was cheap. Designed to join any kind of story, but with no chance to survive.” After being sold, Parreno and Huyghe gave Ann Lee a second life by passing her on to other artists, who created their own storylines for the character – free of charge.
The project was called ‘No Ghost Just a Shell’ and ran from 1999 until 2003. Then, Parreno and Huyghe decided that the copyright of this fictional character should be assigned to her, so that she could no longer be exploited. A lawyer in New York drew up an official contract.
“She’s a polyphonic character . . . What’s interesting about this manga figure is that it’s a way to tell a story. A sexual story? You can use her. A dark story? You can use her. A nice story? You can use this character. She’s almost like a tool.”
As this quote suggests, Huyghe seems unsure as to the gender of Annlee, who is alternately a “she,” an “it,” and (almost) a “tool.” Is Ann Lee an object, or is she just objectified?
- Pierre Huyghe, Two Minutes Out of Time, 2000.
- Escaping Ghost
- Girlhood and the Plastic Image by Heather Warren-Crow

🕜👽
Suzanne (2002)
In 2002, Not a Number Technologies (NaN), the company behind Blender, went bankrupt. Nevertheless, they put out one more release, 2.25. As a sort-of easter egg and last personal tag, the artists and developers decided to add a 3D model of a chimpanzee head (called a “monkey” in the software), which they named Suzanne.
Suzanne is still included in Blender, which was later rescued and open-sourced. The largest Blender contest gives out an award called the Suzanne Award.

🕝🐒

Hannah Steller / Parked Domain Girl (2005)
Dustin Steller took this photo of his sister, Hannah Steller, and tossed it onto his iStockPhoto portfolio. For a few cents, Demand Media scooped up the photo and licensed it for use throughout their web properties. Every time a website goes dark, Demand Media scoops up the domain registration and parks it, with ads and links around this photo. The file is usually saved with the name “0012_female_student.jpg”.
“Parked Domain Girl” or “the Expired Domain Girl” has since spurred a little online fandom.
“Is it even her own at this point? Is it recognisable by a significant number of folk? Is this image ubiquitous (enough)? I’d argue the face/image is ours more than hers at this point.”
- Emilie Gervais: Parked Domain Girl Tombstone
- Parker Ito: Parked Domain Girl

🕞🌐💤
Fabio (2012)
In 2012, Deanna Needell and Rachel Ward got permission from the agent of Fabio Lanzoni to use the popular male model’s likeness rather than Lena’s.
- Deanna Needell and Rachel Ward: Lena vs Fabio
- Stable image reconstruction using total variation minimization
🕟🧺 🧷
Render Ghost
(the only identified ghost: Olsen Twin) (2013)
A collection of portraits that fill the 3D renders of future architecture, captured from billboards on the streets of London. James Bridle states: “The Render Ghosts are the people who live inside our imaginations, in the liminal space between the present and the future, the real and the virtual, the physical and the digital. A world of architecture, urbanism and the city before it is completed - which is also never. They inhabit a space which exists only in the virtual spaces of 3D computer rendering software, projected onto billboards, left to rot and torn down when the actual future arrives; never quite as glossy or as perfect as our renderings of it would like it to be, or have prepared us for.”
The question is: have you seen these people? Did they give permission for their images to inhabit these architectures?
- The Render Ghosts James Bridle
- Render Search

🕠🏗👻
Ariwasabi aka Ariane (2014)
You MUST have seen her in recent years! “Ariane is so ubiquitous, she has probably entered your subconscious at some point.”
- Shutterstock
- The photomodel that wanted to be anonymous

🕡🏧👤
Tay (2016)
(Zach Blas version)
Tay was a chatbot based on artificial intelligence, originally released by Microsoft via Twitter on March 23, 2016. Tay caused controversy when she began to post inflammatory and offensive tweets through her Twitter account, causing Microsoft to shut down the service only 16 hours after its launch.
- I am here to learn so by Zach Blas

🕢🔇⛔️
The Other Nefertiti (2015)
An artistic intervention by Jan Nikolai Nelles and Nora Al-Badri.
By orchestrating a leak of the 3D file information of the Nefertiti bust, they wished to “activate the artefact, to inspire a critical re-assessment of today’s conditions and to overcome the colonial notion of possession in Germany’s museums.”
With regard to the notion of belonging and possession of material objects of other cultures, the artists’ intention is to make cultural objects publicly accessible and to promote a contemporary and critical approach to how the “Global North” deals with heritage and the representation of “the Other”.
We should tell stories of entanglement and Nefertiti is a great case to start with to tell stories from very different angles and to see how they intertwine.
- The other Nefertiti by Nora Al-Badri & Jan Nikolai Nelles

🕣🇪🇬🖨
Actroid: Ashley from Oddcast (2019)
Ashley is “a virtual nurse for human patients. Her appearance and voice were developed by ODDCAST, a provider of text-to-speech software.
[...] The possibility of discrimination on the basis of simulated femininity has long been part of the code of female impersonators. Alexa has a plethora of responses should she face harassment.” In her installation OFFREAL, Malin Kuht gives the virtual assistants their say.
- Malin Kuht: OFFREAL (2019)

🕤🎚🎙
A Ghost for De-Calibration (2019)
Soon after the release of a Vernacular of File Formats (2010), a sequence of nonconsensual, non-attributed instances of exploitation appeared: my face became an embellishment for cheap internet trinkets such as mugs and sweaters, and for the application buttons of proprietary glitch software. The image, exploited by artists and creators alike, started to lose its connection to the source, to me, and instead became the portrait of no one in particular, a specter, similar to a Shirley test image, though in this case, a Shirley for de-calibration.
Last May (2019), I was asked to lend the image to Vogue (the US fashion magazine). I took this as an opportunity to reclaim the image in the only way I could imagine possible: by renaming the portrait formerly known as “Blinx (from a Vernacular of File Formats)” to “A Ghost for De-calibration”. With this small action, probably invisible to most, I wish to take a stand against the discourse of color test cards and promote the consideration and creation of alternative cards and resolutions.
- De-Calibration Card

🕥🌀📏🏴‍☠️
365 Perfect decalibration (2019-2020)
365 perfect is “the best FREE virtual makeup app, period. It’s like having a glam squad in your pocket!”
The options in the app include virtual photo make-up filters such as ‘delete blemishes’ or ‘brighten and soften skin’. It can also deepen the smile, apply lipstick or even lip tattoos, enlarge the eyes, make the face slimmer, lift the cheeks, enhance the nose, or resize the lips. And let’s not forget: whiten the teeth, add eyelashes, eyeliner and eyebrows, and finally change the hairstyle.
This sticker was created by using the app to make myself perfect: not once, not twice, but hundreds of times over, accruing more perfection over the course of a year.
If every day I could get just one shade lighter, slimmer cheeks or bigger eyes... how perfect would I become?
By re-saving my newly beautified face every iteration, the artifacts of a re-compressed JPEG and the absurdity of our beautifying standards are amplified.
- 365 Perfect
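The amplification of re-compression artifacts can be sketched in a toy model. This is an illustrative assumption, not the actual Perfect365 pipeline or real JPEG encoding: each pass slightly brightens the pixels (the "beautify" step) and then snaps them to a coarse grid (a crude stand-in for JPEG's quantization of DCT coefficients), so the errors of every generation accumulate instead of averaging out.

```python
# Toy model of generation loss: repeatedly "beautify" (brighten) a row of
# pixels and re-save it through a lossy step. The coarse quantization below is
# a hypothetical stand-in for JPEG compression; lift and step are made-up
# parameters chosen only to make the drift visible.

def beautify(pixels, lift=1.05):
    # one shade lighter each pass, clipped to the 8-bit range
    return [min(255, int(p * lift)) for p in pixels]

def lossy_save(pixels, step=16):
    # crude stand-in for lossy compression: snap values to a coarse grid
    return [min(255, round(p / step) * step) for p in pixels]

def iterate(pixels, rounds=100):
    for _ in range(rounds):
        pixels = lossy_save(beautify(pixels))
    return pixels

row = [90, 120, 150, 180]
final = iterate(row)
print(final)  # each value has drifted lighter and locked onto the coarse grid
```

After a few rounds every pixel either saturates at white or gets stuck on a quantization level above its starting value: the "beautification" is baked in and the original tones are unrecoverable, which is the amplification the re-saved stickers make visible.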

🕦✅💚🦠

 


This i.R.D. Perfect De/Calibration Army tele-chorale was rendered especially for
Technofetishism: Whip it into Shape. March 21, 2025 - August 31, 2025. MOMus, Thessaloniki, Greece.


The i.R.D. Perfect de/Calibration Army was formed with the help of Perfect365, an app for the mobile phone that describes itself as:

“the best FREE virtual make-up app, period. It’s like having a glam squad in your pocket”

The free app offers virtual photo make-up, including filters such as ‘delete blemishes’ and ‘brighten or soften skin’. It can also deepen the smile, add virtual lipstick or lip tattoos, enlarge the eyes, make the face slimmer, lift the cheeks, enhance the nose, resize the lips, or give you fat duck lips, to name just a few of those perfections.

I used the app on the most widespread White Shadows, the Caucasian ladies who were used as color test cards for image processing - not once, not twice, but hundreds of times over and over ...
... so much perfection ...

With every tuning of the face, the portraits would turn just one shade lighter, slimmer or smoother... until the newly beautified faces would move from exaggeration to showcasing the absurdity of beautifying standards.


Eight one-of-a-kind archival prints, each with an embedded AR video. The prints are mounted in a custom-built wooden frame that has a test target extruding out of it.

1. Decalibrated Self portrait with USAF1951 Resolution test Tri-bars [46 x 58 cm including black square]
2. Decalibrated Self portrait with quarter Siemens Star and DIY Test Resolution Target for Pixel Resolution [39 x 45 cm with extruding DIY Test Resolution Target]
3. Decalibrated Self portrait with Facial Recognition crosshair [47 x 38 cm including extruding crosshair]
4. Decalibrated Self portrait with Ronchigrams for mirror curvature testing [45 x 58 cm including Ronchigrams]
5. Decalibrated Ariane Shutterstock model with Modulation Transfer Function Tests, Fitzpatrick Scale, Discrete Cosine Transform and Quadrant Pattern registration marks [41 x 37 cm including extruding Modulation Transfer Function Test]
Measurements are approximate and do not include the custom-built frame.
6. L’Inconnue de la Seine (aka: CPR ANNIE / ANNIE R U OK?) with noise stripe (42.16 x 55.21 cm)
7. Lena JPEG with DCT extrusion (46.06 x 46.06 cm)
8. Tay Tweets with Facial Recognition crosshair (55 x 55 cm)
Measurements are approximate and do include the custom-built frame.