Beyond Resolution (2020)
Published on the occasion of my solo show at SJSU art galleries (3rd of March - 15th of May 2020): Shadow Knowledge. Beyond Resolution is downloadable as a pdf here
Menkman, Rosa. Beyond Resolution (i.R.D.: 2020).
ISBN: 978-90-828273-0-9


In Beyond Resolution, Menkman insists on an extended formulation of resolution. A resolution is not just a trade-off between settings that manage speed and functionality while considering material affordances.
A resolution also always involves the inherent compromise of other ways of rendering, and it is through these other ways - these alternative but never implemented resolutions - that we need to train ourselves to see, run and formulate our alternatives. With the example of the genealogy of color test cards, Menkman offers an exemplary way to make such latent and biased power structures more apparent.

In Beyond Resolution, Menkman insists that these standard images, trapped in the histories of our technologies, become part of the public domain. In order to illuminate the white shadows that govern the outcomes of our image processing technologies, we must document the genealogies of standardization. These genealogies belong in high school textbooks: the latent violence coded within such norms should be studied as part of standard curricula to inform a future generation of engineers of compromises made in the past.

This independently published book consists of a collection of different types of texts, ranging from short stories to an introduction to basic optics and a manifesto-like text.

The publication is accompanied by a collection of artworks that Menkman developed during a triptych of solo shows (institutions of Resolution Disputes, Behind White Shadows and Shadow Knowledge), all geared towards introducing and developing the concept of "Resolution Studies".



[ Resolution Studies ]
Refuse to let the syntaxes of (a) history direct our futures.
An introduction to Resolution || Presentations and class materials


Abstract

This essay starts with the description of a pioneering work of video art by Jon Satrom: QTzrk (2011). With the help of this case study, Menkman illustrates how certain digital video settings, or resolutions, are not universally supported, even though they could have changed our entire understanding of the medium of video. The case study serves as an introduction to "Resolution Studies", a proposal for a theory that involves more than just the calculation of resolutions as described in physics. In essence, resolution studies is about literacy: literacy of the machines, the people, the people creating the machines, and the people being created by the machines. But resolution studies does not only involve the study of the effects of technological progress or the aesthetization of the scales of resolution. Resolution studies also involves research into alternative settings that could have been in place, but are not – and the technologies which are, as a result, rendered outside of the discourse of computation.






Screenshot of Satrom, Jon. QTzrk (2011).
Technically, QTzrk (Jon Satrom, video installation, 2011) consists of two main video ‘elements’. The first element, a 16:9 video, is a desktop video – a video captured from the desktop perspective – and features ‘movie.mov’. Movie.mov is shown on top of the desktop, an environment which gets deconstructed during QTzrk. The second type of video element is the QTlet – smaller, looped videos that are not quadrilateral. The QTlets are constructed and opened via a now obsolete option in the Quicktime 7 software: Satrom used a mask to change the shape of the otherwise four-cornered videos and transformed them into ‘video shards’. The QTlet elements are featured in QTzrk but are also released as standalone downloadables, available on Satrom’s website; however, they no longer play properly on recent versions of Mac OS X (the masking option is now obsolete and playing these files significantly slows down the computer).

Story-wise, QTzrk starts when the movie.mov file is clicked. It opens inside a Quicktime 7 player, on top of what later turns out to be a clean desktop that is missing a menu bar. Movie.mov shows a slow-motion nature video of a Great White Shark jumping out of the ocean. Suddenly, a pointer clicks the pause button of the interface, and the Great White Shark turns into a fluid video, leaking out of the Quicktime 7 movie.mov interface buttons. The shark folds up into a kludgy pile of video, resting on the bottom of the desktop, still playing but now in a messily folded way. The Quicktime 7 movie.mov window changes into what looks like a ‘terminal’, which is then commanded to save itself as a QTlet named ‘shark_pile’. The shark_pile is picked up by a mouse pointer, which performs like a kind of invisible hand, hovering the pile over the desktop and finally dropping it back into the Quicktime window, which now shows line after line of mojibake data. This action – dropping the shark_pile inside the Quicktime window – seems to be the trigger for the desktop environment to collapse.

The Quicktime player breaks apart, no longer adhering to its quadrilateral shape but taking the shape of a second, downloadable QTlet. On the desktop, 35 screenshots of the shark frame appear. A final, new QTlet is introduced; this one consists of groups of screenshots, which when opened up show glitched png files. These clusters themselves transform into new video sequences (a third downloadable QTlet), adding more layers to the collage. By now the original movie.mov seems to slowly disappear into the desktop background, which itself features a ‘datamoshed’ shark video (datamoshing is the colloquial term for the purposeful deconstruction of an .mpeg, or any other video compressed following an interframe/keyframe standard, by removing or corrupting its keyframes). After a minute of noisy droning of the QTlets on top of a datamoshed shark, the desktop suddenly starts to zoom out, revealing one Quicktime 7 window inside the next. Finally, the cursor clicks the closing button of the Quicktime 7 interface, ending the presentation and revealing a clean white desktop with just the one movie.mov file icon in the middle. Just when the pointer is about to reopen the movie.mov file, and start the loop all over again, QTzrk ends.

TITLE: QTzrk
DIMENSIONS: expandable/variable
MATERIALS: QuickTime 7
YEAR: 2011
PRICE: FREE; Instructions, TXT, & download available (here)




Wendy Chun: Updating to Remain the Same (2016)

Alex Galloway: The Interface Effect (2012)

I wish I could open Google image search, query ‘rainbow’ and just listen to any image of a rainbow Google has to offer me. I wish I could add textures to my fonts and that I could embed within this text videos starting at a particular moment, or a particular sequence in a video game. I wish I could render and present my videos as circles, pentagons, and other, more organic manifolds.
If I could do these things, I believe my use of the computer would be different: I would create modular relationships between my text files, and my videos would have uneven corners, multiple timelines and changing soundtracks. In short, I think my computational experience could be much more like an integrated collage, if my operating system allowed me to make it so.


Moving Beyond Resolution

In 2011, Chicago glitch artist Jon Satrom released his QTzrk installation. The installation, consisting of four different video loops, offers a narrative that introduced me both to the genre of desktop film and to non-quadrilateral video technology. As such, it left me both shocked and inspired. So much so that, in 2012, inspired by Satrom’s work, I set out to build Compress Process, an application that would make it possible to navigate video inside a 3D environment. I too wanted to stretch the limits of video, especially beyond its quadrilateral frame. However, upon the release of Compress Process, Wired magazine reviewed the video experiment as a ‘flopped video game’. Ironically, the Wired reporter could not imagine video existing outside the confines of its traditional two-dimensional, flat and standardized interface. From his point of view, this other resolution – 3D – meant an imposed re-categorization of the work; it became a gaming application and was reviewed (and burned) as ‘a flop’. (1)

This kind of imposition of the interface (or the interface effect) is extensively described by NYU new media professor Alexander Galloway in his book The Interface Effect.(2) Galloway writes that ‘an interface is not a thing, an interface is always an effect. It is always a process or a translation.’ The interface thus always becomes part of the process of reading, translating and understanding our mediated experiences. Thinking through this reasoning can also help explain Oliver Klatt’s reaction in his Wired review of Compress Process, and it makes me wonder: is it at all possible to escape the normative or habitual interpretation of the interface?

As Wendy Hui Kyong Chun writes: ‘New media exist at the bleeding edge of obsolescence. […] We are forever trying to catch up, updating to remain (close to) the same.’(3) Today, the speed of the upgrade has become incommensurable; new upgrades arrive too fast and even seem to exploit this speed as a way to obscure their new options, interfaces and (im)possibilities. Because of the speed of the upgrade, remaining the same, or using technology in a continuous manner, has become a mere ‘goal’. Chun’s argument seems to echo what Deleuze already described in his Postscript on the Societies of Control:

Capitalism is no longer involved in production […] Thus it is essentially dispersive, and the factory has given way to the corporation. The family, the school, the army, the factory are no longer the distinct analogical spaces that converge towards an owner – state or private power – but coded figures – deformable and transformable – of a single corporation that now has only stockholders. […] The conquests of the market are made by grabbing control and no longer by disciplinary training, by fixing the exchange rate much more than by lowering costs, by transformation of the product more than by specialization of production.(4)

In other words: the continuous imposition of the upgrade demands a form of control over the user, leaving them with a sense of freedom while their practices actually become more and more restricted.

Today, the field of image processing forces almost all formats to follow quadrilateral, ecology-dependent standard (re)solutions, which result in tradeoffs (compromises) between settings that manage speed and functionality (bandwidth, control, power, efficiency, fidelity), while at the same time considering claims in the realms of value vs. storage, processing and transmission. At a time when image-processing technologies function as black boxes, we desperately need to research, reflect on and re-evaluate our technologies of obfuscation. However, institutions – schools and publications alike – appear to consider the same old settings over and over, without critically analyzing or deconstructing the programs, problems and possibilities that come with our newer media. As a result, there exists no support for the study of (the setting of) alternative resolutions. Users just learn to copy the use of the interface, to paste data and replicate information, but they no longer question, or learn to question, their standard formats.

Mike Judge made some alarming forecasts in his 2006 science fiction comedy film Idiocracy, which, albeit indirectly, echo the words of science fiction writer Philip K. Dick in his dystopian short story ‘Pay for the Printer’ (1956): in an era in which printers print printers, slowly everything will resolve into useless mush and kipple. In a way, we have to realise that if we do not approach our resolutions analytically, the next generation of our species will likely be discombobulated by digital generation loss – a loss of fidelity in resolutions between subsequent copies and trans-copies of a source. As a result, daily life will turn into obeying the rules of institutionalized programs, while the user will end up only producing monotonous output.

In order to find new, alternative resolutions and to stay open to refreshing, stimulating content, I need to ask myself: do I, as a user, consumer and producer of data and information, depend only on my conditioning and the resolutions that are imposed on me, or is it possible for me to create new resolutions? Can I escape the interface, or does every decontextualized materiality immediately get re-contextualized inside another, already existing, paradigm or interface? How can these kinds of connections block or obscure intelligible reading, or actually offer me a new context to resolve the information? Together these questions set up a pressing research agenda but also a possible future away from monotonous data. In order to try to find an answer to any of these questions, I will need to start at the beginning: an exposition of the term ‘resolution’.


An Etymology of Resolution
The meaning of the word resolution depends greatly on the context in which it is used. The Oxford dictionary, for instance, differentiates between an ordinary and a formal use of the word, while it also lists definitions from the discourses of music, medicine, chemistry, physics and optics. What is striking about these definitions is that they read as not just diverse, but at times even contradictory. In order to come to terms with the many different definitions of the word resolution, and to avert a sense of inceptive ambiguity, I will start this chapter with a very short etymology and description of the term. The word resolution finds its roots in the Latin word re-solutio and consists of two parts: re-, which is a prefix meaning again or back, and solution, which can be traced back to the Latin action noun solūtiō (“a loosening, solution”), or solvō (“I loosen”). Resolution thus suggests a separation or disentanglement of one thing from something it is tied up with, or “the process of reducing things into simpler forms.”(1)

The Oxford Dictionary places the origin of resolution in late Middle English, where it was first recorded in 1412 as resolucioun (“a breaking into parts”), but also references the Latin word resolvere.(2) Douglas Harper, historian and creator of the Online Etymology Dictionary, describes a kinship with the fourteenth-century French word solucion, which translates to division, dissolving, or explanation.(3) Harper also writes that around the 1520s the term resolution was used to refer to the act of determining or deciding upon something by “breaking (something) into parts to arrive at a truth or to make a final determination.” Following Harper, resolving in terms of “a solving” (of a mathematical problem) was first recorded in the 1540s, as was its usage to mean “the power of holding firmly” (resolute).(4) This is where to “pass a resolution” stems from (1580s).(5) Resolution in terms of a “decision or expression of a meeting” is dated at around 1600, while a resolution made around New Year, generally referring to the specific wish to better oneself, was first recorded around the 1780s. When a resolution is used in the context of a formal, legislative, or deliberative assembly, it refers to a proposal that requires a vote. In this case, resolution is often used in conjunction with the term motion, and refers to a proposal (also connected to “dispute resolution”).(6)

So while in chemistry resolution may mean the breaking down of a complex issue or material into smaller pieces, resolution can also mean the finding of an answer to a problem (in mathematics) or even the deciding of a firm, formal solution.(7) This use of the term resolution – a final solution – seems to oppose the older definitions of resolution, which generally signify an act of breaking down. Etymologically, however, these different meanings of the term all still originate from the same root. Douglas Harper dates the first recording of resolution referring to the “effect of an optical instrument” back to the 1860s.(8) The Oxford Dictionary does not date any of the different uses of the term, but it does end its list of definitions with: “5) The smallest interval measurable by a telescope or scientific instrument; the resolving power. 5.1) The degree of detail visible in a photographic or television image.”(9)


--------- --------- --------- --------- --------- ---------
(1) "Resolution." Douglas Harper: Online Etymology Dictionary. Accessed: January 30, 2018. <http://dictionary.reference.com/browse/resolution>.

(2) "Resolution." Dictionaries, Oxford. Oxford University Press. Accessed: January 30, 2018. <http://www.oxforddictionaries.com/definition/english/resolution>.

(3) "Solution." Douglas Harper: Online Etymology Dictionary. Accessed: January 30, 2018. <http://www.etymonline.com/index.php?term=solution>.

(4) "Resolution." Douglas Harper: Online Etymology Dictionary. Accessed: January 30, 2018. <http://dictionary.reference.com/browse/resolution>.

(5) "Resolve." Douglas Harper: Online Etymology Dictionary. Accessed: January 30, 2018. <http://dictionary.reference.com/browse/resolve>.

(6) Shaw, Harry. Dictionary of problem words and expressions. McGraw-Hill Companies: 1987. Accessed: January 30, 2018. <http://problem_words.enacademic.com/1435/resolution,_motion>.

(7) 1) A firm decision to do or not do something, 1.1) A formal expression of opinion or intention agreed on by a legislative body or other formal meeting, typically after taking a vote 2) The quality of being determined or resolute 3) The action of solving a problem in a contentious matter 3.1) Music: The passing of a discord into a concord during the course of changing harmony: 3.2) Medicine: The disappearance of a symptom or condition 4) Chemistry: The process of reducing or separating something into constituent parts or components 4.1) Physics: The replacing of a single force or other vector quantity by two or more jointly equivalent to it. 5) The smallest interval measurable by a telescope or scientific instrument; the resolving power. 5.1) The degree of detail visible in a photographic or television image.
Angus Stevenson (ed.): Oxford Dictionary of English. Third edition, Oxford 2010: p. 1512.

(8) "Resolution." Douglas Harper: Online Etymology Dictionary. October 30, 2015. <http://dictionary.reference.com/browse/resolution>.

(9) Stevenson, Angus (ed.): Oxford Dictionary of English. Third edition, Oxford University Press, 2010. p. 1512.




Rayleigh phasing
Decalibrated Ronchi Rulings
Optical Resolution
In 1877, the English physicist John William Strutt succeeded his father to become the third Baron Rayleigh. While Rayleigh’s most notable accomplishment was the discovery of the inert (not chemically reactive) gas argon in 1895, for which he earned a Nobel Prize in 1904, Rayleigh also worked in the field of optics. Here he wrote a criterion that is still used today in the process of quantifying angular resolution: the minimum angle at which a point of view still resolves two points, or the minimum angle at which two points become visible independently from each other.(5) In an 1879 paragraph titled ‘Resolving, or Separating, Power of Optical Instruments’, Lord Rayleigh writes: ‘According to the principles of common optics, there is no limit to the resolving power of an instrument.’ But in a paper written between 1881 and 1887, Rayleigh asks: ‘How is it […] that the power of the microscope is subject to an absolute limit […]? The answer requires us to go behind the approximate doctrine of rays, on which common optics is built, and to take into consideration the finite character of the wave-length of light.’(6)

When it comes to straightforward optical systems that consider only light rays from a limited spectrum, Rayleigh was right: in order to quantify the resolution of these optical systems, contrast – the amount of difference between the maximum and minimum intensity of light visible within the space between two objects – is indispensable. Just as a white line on a white sheet of paper needs contrast to be visible (to be resolved), it is not possible to distinguish between two objects when there is no contrast between them. Contrast between details defines the degree of visibility, and thus resolution: no contrast results in no resolution.

But the contrast between two points, and thus the minimum resolution, is contingent on the wavelength of the light and any possible diffraction patterns between those two points in the image. The ring-shaped diffraction pattern of a point (light source), known as an Airy Pattern after George Biddell Airy, is characterized by the wavelength of light illuminating a circular aperture. When two point lights are moved into close proximity – so close that the first zero crossing of one Airy disk falls inside the first zero crossing of the other – the oscillation within the Airy Patterns will cancel most of the contrast of light between them. As a result, the two points will optically be blurred together, no matter the lens’s resolving power. The diffraction of light thus means that even the biggest imaginable telescope has limited resolving power.

Rayleigh described this effect in his Rayleigh criterion, which states that two points can be resolved when the centre of the diffraction pattern of one point falls just outside the first minimum of the diffraction pattern of the other. For a circular aperture, he states that it is possible to calculate the minimum angular resolution as:
θ = 1.22 λ / D

In this formula, θ stands for angular resolution (which is measured in radians), λ stands for the wavelength of the light used in the system (blue light has a shorter wavelength, which will result in a better resolution), and D stands for the diameter of the lens’s aperture (the hole with a diameter through which the light travels). Aperture is a measure of a lens’s ability to gather light and resolve fine specimen detail at a fixed object distance.
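To make the criterion concrete, here is a small worked example (the numbers are mine, purely illustrative): a lens with a 100 mm aperture resolving two points in green light.

    import math

    wavelength = 550e-9    # green light, in metres
    aperture = 0.1         # lens diameter D, in metres (100 mm)

    theta = 1.22 * wavelength / aperture   # Rayleigh criterion, in radians
    print(theta)                           # ~6.7e-6 rad
    print(math.degrees(theta) * 3600)      # ~1.4 arcseconds

Doubling the aperture halves θ: larger lenses resolve finer angular detail, which is one reason telescope mirrors keep growing.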

As stated before, an angular resolution is the minimum distance between two points (light sources) required to stay individually distinguishable from each other. Here, a smaller resolution means there is a smaller resolution angle (and thus less space) necessary between the resolved dots. However, real optical systems are complex and suffer from aberrations, flaws in the optical system and practical difficulties such as specimen quality. Besides this, in reality, most often two dots radiate or reflect light at different levels of intensity. This means that in practice the resolution of an optical system is always higher (worse) than its calculable minimum.

All technologies have a limited optical resolution, which depends on, for instance, aperture, wavelength, contrast and angular resolution. When the optical technology is more complex, the actors that are involved in determining the minimal resolution of the technology become more diverse and the setting of resolution changes into a more elaborate process. In microscopy, just like in any other optical technology, angular and lateral resolution refer to the minimum amount of distance needed (measured in rads or in metres) between two objects, such as dots, that still make it possible to just tell them apart. However, a rewritten mathematical formula defines the theoretical resolving power in microscopy as:
dmin = 1.22 λ / NA

In this formula, dmin stands for the minimal distance two dots need from each other to be resolved, or minimal resolution. λ stands again for the wavelength of light. In the formula for microscopy, however, the diameter of the lens’s aperture (D) is swapped with NA, or Numerical Aperture, which consists of a mathematical calculation of the light-gathering capabilities of a lens. In microscopy, this is the sum of the aperture of an objective and the diaphragm of the condenser, which have set values per microscope. Resolution in microscopy is thus determined by certain physical parameters that not only include the wavelength of light, but also the light-gathering power of objective and lenses.
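Under assumed values (again my own, for illustration), the same calculation for a high-end oil-immersion setup in which objective and condenser each contribute a numerical aperture of 1.4 looks like this:

    wavelength = 550e-9    # green light, in metres
    na = 1.4 + 1.4         # numerical aperture: objective plus condenser

    d_min = 1.22 * wavelength / na
    print(d_min)           # ~2.4e-7 m: details closer than ~240 nm blur together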

The definition of resolution in this second formula is expanded to also include the attributed settings of strength, accuracy or power of the material agents that are involved in resolving the image, such as the objective, condenser and lenses. At first sight, this might seem like a minimal expansion, easily dismissed as a simple rephrasing or rewriting of the earlier formula for angular resolution. However, the expansion of the formula with just one specific material agent, the diaphragm, and the attribution of certain values to this material agent (which are often determined in increments rather than a fluid spectrum of values) is actually an important step that illustrates how technology gains complexity. Every time a new agent is added to the equation, the agent introduces complexity by adding its own rules or possible settings, involving or influencing the behaviour of all other material agents. Moreover, the affordances of these technologies, or the clues inherent to how the technology is built that tell a user how it can or should be used, also play a role that intensifies the complexity of the resolution of the final output. As James J. Gibson writes: ‘[…] affordances are properties of things taken with reference to an observer but not properties of the experiences of the observer.’(7)

In photography, for instance, the wider the aperture, the shallower the depth of field and the closer the lens needs to come to the object. This also introduces new possibilities for failure: if the diaphragm does not afford an appropriate setting for a particular equation, it might not be possible to resolve the image at all – the imaging technology might simply refuse, or even state an ‘unsupported setting’ error message, in which case the technological assemblage will not resolve an image at all: the foreclosure of an abnormal option rather than an impossibility. Thus: the properties of the technological assemblage that the user handles – the affordances – add complexity to the setting of a resolution.




Aspect Ratio, Resolution and Resolving Power
In optical systems, the quality of the rendered image depends on the resolving power and acutance of the technological assemblage that renders the image; on the (reflected) light of the source or subject that is captured; and on the context and conditions in which the image is recorded. Consider, for instance, how different objects (lens, film, image sensor, compression algorithm) have to weigh (or dispute) between standard settings (frame rate, aperture, ISO, number of pixels and pixel aspect ratio, color encoding scheme or weight in Mbps), while having to evaluate the technologies’ possible affordances: the possible settings the mediating technological architecture offers when connecting or composing these objects and settings. Finally, the resolving power is an objective measure of resolution, which can, for instance, be measured in horizontal lines (horizontal resolution) and vertical lines (vertical resolution), line pairs or cycles per millimetre. The image acutance refers to a measure of the sharpness of the edge contrast of the image and is measured following a gradient. A high acutance means a cleaner edge between two details, while a low acutance means a soft or blurry edge.

Following this definition of optical resolution, digital resolution should – in theory – also refer to the pixel density of the image on display, written as the number of pixels per area (in PPI or PPCM) and maybe extended to consider the apparatus, its affordances and settings (such as pixel aspect ratio or color encoding schemes). However, in everyday use of the term, the meaning of digital resolution is constantly confused or conflated to simply refer to a display’s standardized output or graphics display resolution: the number of distinct pixels the display features in each dimension (width and height). As a result, resolution has become an ambiguous term that no longer reflects the quality of the content that is on display. The use of the word ‘resolution’ in this context is a misnomer, since the number of pixels in each dimension of the display (e.g. 1920 × 1080) says absolutely nothing about the actual pixel density, the pixels per unit or the quality of the content on display, which may in fact be shown zoomed, stretched or letter-boxed and wrongly color encoded, to fit the display’s standard display resolution.(8)
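The misnomer is easy to make concrete (a sketch with display sizes I assume for illustration): the same 1920 × 1080 pixel grid produces wildly different pixel densities depending on the physical dimensions of the display.

    # Pixel density (PPI) from pixel dimensions and the physical diagonal in inches.
    def ppi(width_px, height_px, diagonal_inch):
        return (width_px ** 2 + height_px ** 2) ** 0.5 / diagonal_inch

    print(ppi(1920, 1080, 24))    # ~92 PPI on a 24-inch monitor
    print(ppi(1920, 1080, 5.5))   # ~401 PPI on a 5.5-inch phone

The ‘1920 × 1080’ label stays the same in both cases; only the density – and with it the level of detail the display can actually resolve – changes.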

Generally, these settings either ossify as requirements or de facto norms, or are notated as de jure – legally binding – standards by organizations such as the International Organization for Standardization (ISO). While this makes the process of resolving an image less complex, since it systematizes parts of the process, ultimately it also makes the process less transparent and more black-boxed. And it is not only institutions such as ISO that program, encode and regulate (standardize) the flows of data in and between our technologies, or that arrange the data in our machines following systems that underline efficiency or functionality. In fact, data is organized following either protocol or proprietary standards developed by technological oligarchs to include all kinds of inefficiencies that the user is not conditioned or even supposed to see, think or question. These proprietary standards function as a type of controlling logic that re-encapsulates information inside various wrappers in which our data is (re-)encoded, edited and even deformed by nepotistic, (sometimes) covertly operating cartels for reasons like insidious data collection or locking the user into their proprietary software.

Just like in the realm of optics, a resolution does not just mean a final rendition of the data on the screen, but also involves the processes and affordances involved during its rendition – the tradeoffs inside the technological assemblage which records, produces and displays the image (or other media, such as video, sound or 3D data). The current conflation of the meaning of resolution within the digital – as a result of which resolution only refers to the final dimensions at which the image is displayed – obscures the complexities and politics at stake in the process of resolving and, as a result, presents a limit to the understanding, using, compiling and reading of (imaging) data. Further theoretical refinements that elaborate on the usage and development of the term ‘resolution’ have been missing from debates on resolutions since it was ported from the field of optics, where it has been in use for two centuries. To garner a better understanding of our imaging technologies, the word ‘resolution’ itself needs to be resolved; or rather, it needs to be disentangled to refer not just to a final output, but to a more procedural construct.


Untie&&Dis/Solve: Digital Resolutions
Resolutions are made, and they involve procedural trade-offs. The more complex an image-processing technology is, the more actors its rendering entails, each following their own rules or standards to resolve an image, influencing the image’s final resolution. However, these actors and their inherent complexities are increasingly positioned beyond the fold of everyday settings, outside the afforded options of the interface. This is how resolutions do not just function as an interface effect but also as a hyperopic lens, obfuscating some of the most immediate stakes and possible alternative resolutions of media.

Unknowingly, the user and audience suffer from technological hyperopia: a condition of ‘farsightedness’ that does not allow the user to properly see the processes taking place right under their nose. Rather, they just focus on a final end product. This is due to a shift from creating a resolution, to the setting of a resolution and, finally, to the imposition of resolutions as standard settings. Every time we press print, run, record, publish or render, we also press a button we could consider as ‘compromise’. Unfortunately, however, what we compromise – settings between or beyond standards, which could deliver other, maybe unwanted, but maybe also desirable outcomes – is often obfuscated. We need to shift our understanding of resolutions and see them as disputable norms, habitual compromises and expandable limits. By challenging the actors involved in the setting of resolutions, the user can scale actively between increments of hyperopia and myopia. The question is: has the user become unable to construct their own settings, or has the user become oblivious to resolutions and their inherent compromises? And how has the user become this blind?

One answer to this question can be found in a redefinition of the screen. For a long time the screen was just a straightforward, typically passive object that acted as a veil: it would emit or reflect light. Today, the term ‘screen’ may still refer to its old configuration: a two-dimensional material surface or threshold that partitions one space from the next, or functions as a shield. As curator and theorist Christiane Paul writes: the screen acts as a mediator of (digital) light. However, over the past decades, technological innovations have transformed the notion of the screen into a wide variety of complex amalgamations.(9)

But over time, the screen has transformed into a navigational plane, rendering it similar to an interface or GUI. Media archaeologist Erkki Huhtamo entertains the possibility of describing the contemporary screen as a framed surface or container for displaying visual information, one that is controlled by the user and therefore not permanently part of the frame; he finally argues that the screen exists as a constantly changing, temporally constructed interface between the user and information.(10) As Galloway explains in The Interface Effect (2012), the interface is part of the processes of understanding, reading and translating our mediated experiences: it operates as an effect. In his book The Interface Envelope (2015), James Ash writes: ‘within digital interfaces, the specific mode of resolution of an object is carefully designed and tested in order to be as optimal as possible […]. In a digital interface, resolution is the outcome of transductions between a variety of objects including screened images, software, hardware and physical input devices, all of which are centrally involved in the design of particular objects within an interface.’(11)

Not only has the screen morphed from a flat surface to an interactive plane of navigation or interface, its content and the technologies that shape its content have developed into extremely complex systems. As Lev Manovich wrote back in 1995: ‘Rather than being a neutral medium of presenting information, the screen is aggressive. It functions to filter, to screen out, to take over, rendering nonexistent whatever is outside its frame’ – a degree of filtering that varies between different screen technologies. The screen is thus not simply a boundary plane. It has become an autonomous object that affects what is being displayed; a threshold mediating different systems or a process oscillating between states. The mobile screen itself is located in-between different applications and uses.(12)

In the computer, most of the interactions with our interfaces are mediated by software applications that act like platforms. These platforms do not take up the full screen, but instead exist within a window. While they all exist on the same screen, these windows follow their own sets of rules and standards; they exist next to and on top of each other like walled gardens. In a sense, these platforms are a modern version of frameworks; they offer a simulacrum of freedom and possibility. In the case of the platform Facebook, for example, content is reformatted and deformed: Facebook recompresses and reformats any posted data, text, sound or images, while it has rules for the number of characters, and for what characters and compressions can be used or uploaded. Facebook has even designed its own emojis for the platform. In short, the platform Facebook enforces its own resolutions. It is important to realize that the screen is in a constant state of assemblage: delimiting and demarcating our ways of seeing, instead of expanding the axial and lateral resolution to layers that are usually obfuscated or uncharted.

It is imperative to rethink the definition of ‘resolution’ and expand it beyond a simple measure of acutance, because what is resolved on the screen and what is not depends not just on the material qualities of the screen or display, or the signal it receives, but also on the processes, platforms, standards and interfaces involved in setting these measures, behind or in the depths beyond the screen or display.

So while in the digital realm, the term ‘resolution’ is often simplified to just mean a number – signifying, say, the width and height of a screen – the critical understanding of the term ‘resolution’ I propose also considers a depth (beyond its outcome). In this ‘depth’, beyond a screen (or when including audible technologies: membrane), protocols and other (proprietary) standards, together with the technological interfaces and the objects’ materialities and affordances, form a final resolution.

The cost of all of these media resolutions – standards encapsulated inside standard encapsulations – is that we have gradually become unaware of the choices and compromises they represent. We need to realize that a resolution is never a neutral settlement, but an outcome that carries historical, economic and political ideologies that were once implemented by choice. While resolutions compromise, obfuscate and obscure particular visual outcomes, the processes of standardization and upgrade culture as a whole also compromise particular technological affordances – creating new ways of seeing or perceiving – altogether. And it is these alternative technologies of seeing, or obscured and deleted settings, that also need to be considered as part of resolution studies.




A Rheology of Data
In 1941, the Argentinian writer Jorge Luis Borges published El Jardín de senderos que se bifurcan (The Garden of Forking Paths), containing the short story ‘The Library of Babel’. In this story, Borges describes a universe in the form of a vast library, containing all possible books following a few simple rules: every book consists of 410 pages, each page displays 40 lines and each line contains approximately 80 letters. Each book features any combinations of 25 orthographic symbols: 22 letters, a period (full stop), a comma and a space. While the exact number of books in the Library of Babel can be calculated, Borges says the library is ‘indefinite and perhaps infinite’.
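The calculation Borges leaves implicit is simple enough to spell out (my arithmetic, not in the story): each book holds 410 × 40 × 80 = 1,312,000 character positions, each filled by one of 25 symbols, so the library contains 25^1,312,000 distinct books – a finite number, but one of roughly 1.8 million decimal digits.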

The fascinating part of this story starts when Borges describes the behaviour of the visitors to the library. In particular, the Purifiers, who arbitrarily destroy books that do not follow the Purifiers’ rules of language or decoding. The word ‘arbitrarily’ is important here, because it references the fluidity of the library; the openness to different languages and other systems of interpretation. One book may, for instance, offer an index to the next book, or a system of decoding – a ‘bridge’ – to read the next. This provokes the question: how do the Purifiers know they did not just read the books in the wrong order? How can they be certain that they were not just lacking an index or a codex that would help them access the books to be purified (burned)?

When I learned about NASA’s use of sonification – the process of displaying any type of data or measurement as sound, or, as it says on the NASA website, ‘the transmission of information via sound’, which the space agency uses to learn more about the universe – I realized that with the ‘right’ listening device, anything can be heard. Even a rainbow. This does not always mean it makes sense to the listener; rather, it testifies to the willingness of contemporary space scientists to build bridges between different domains – something I later came to understand as a ‘rheology of data’.
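The principle itself fits in a few lines. Here is a minimal sketch (a toy example, in no way NASA’s actual pipeline): map a series of measurements onto audible pitches and write the result to a sound file.

    import math, struct, wave

    RATE = 44100                               # samples per second
    data = [0.1, 0.4, 0.9, 0.6, 0.2, 0.8]      # any measurements, normalized to 0..1

    with wave.open("sonified.wav", "w") as f:
        f.setnchannels(1); f.setsampwidth(2); f.setframerate(RATE)
        for value in data:
            freq = 220 + value * 660           # map each value to a pitch, 220-880 Hz
            for i in range(RATE // 4):         # a quarter-second tone per data point
                sample = int(16000 * math.sin(2 * math.pi * freq * i / RATE))
                f.writeframes(struct.pack("<h", sample))

Any series of numbers – temperatures, magnetometer readings, pixel brightnesses along a rainbow – can stand in for the data list; the mapping, not the data, decides what we hear.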

Rheology is a term from the realm of physics or, to be more precise, from mechanics, where it is used to describe the flow of matter – primarily in a liquid state, but also ‘soft solids’, or solids that respond in a plastic way (rather than deforming elastically in response to an applied force). With a ‘rheology of data’ I thus mean a study of how data can be read and displayed in completely different forms, depending on the context – software or interface. A rheology of data facilitates a deformation and flow of the matter of data. It shows that there is a possibility to ‘push’ data into different applications, and to show data in different forms.

Thinking in the ‘rheology of data’ became meaningful to me when I first ran the open source Mac OS X plugin technology Syphon (first released in 2010 as an open video tap, later developed by Anton Marini in collaboration with Tom Butterworth as Syphon). With the help of Syphon, I could suddenly make certain applications (Adobe After Effects, Modul8, Processing or Open Frameworks) share information, such as frames – full frame rate video or stills – with one another, in real time. Syphon allowed me to project my slides and video as textures on top of 3D objects (from Modul8 to Unity). The plugin taught me that my thinking in software environments or ‘walled gardens’ was flawed, or at least limiting. Software is much more interesting when I can leak and push my content through the walls of software, which normally work as closed architectures. Syphon showed me that data is more fluid than the ways in which I am conditioned to perceive and use it; data can be resolved differently, if I make it so. And this is where I want to come back to Jon Satrom’s QTzrk. QTzrk is a name that is not supposed to be spoken; it is a name referencing computer language, leaking back into human spoken language. In QTzrk, Satrom already prefigures what Syphon facilitates: the video shows video software (Quicktime 7) leaking frames – data – from its container into a fluid puddle of data.
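Syphon itself shares GPU textures between macOS applications through its own API; but the gist – two programs reading and writing the same frame buffer, with no export or import step in between – can be sketched with nothing more than a memory-mapped file (a toy stand-in, not Syphon’s actual mechanism; names and sizes are mine).

    import mmap, os

    W, H = 640, 360
    FRAME = W * H * 3                    # bytes per raw RGB frame

    # 'Server' side: create a shared buffer and publish a frame into it.
    fd = os.open("framebuffer", os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, FRAME)
    buf = mmap.mmap(fd, FRAME)
    buf[:FRAME] = bytes([128]) * FRAME   # a flat grey frame, rewritten every tick

    # 'Client' side: a second application maps the same file and reads the
    # newest frame directly, treating another program's output as its input.
    pixels = buf[:FRAME]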

In the realm of computation, though, there is still very little fluidity, and the Library of Babel still remains an asynchronous metaphor for our contemporary approach to computation. Computer programs only function when fed certain forms of formatted data; data for which the machine has codecs installed. (The value of) RAW, non-formatted or unprocessed data is easily dismissed because it is often hard to open or read. There seems to be hardly any freedom in transgressing this. The insights I gained from reading about NASA opened a new approach in my computational thinking: I started teaching my students not just about the electromagnetic spectrum, but also about how NASA, through sonification and other transcoding techniques, could listen to the weather – hoping they would understand the freedom they can take in the processes of perceiving and interpreting data.

Yet the contemporary Purifiers – software, its users and the computer system in general – enforce the rule that when data is illegible, it must be invalid. Take, for instance, Satrom’s QTlets, which, after a life span of little over five years, have become completely obsolete and unplayable, at least in my OS. In reality, it simply means that I do not have the right decoder, which is no longer available and supported. In general, it means that the string of data is not run through the right program or read in the right language – one that would translate its data into a legible form of information. Data is not solid; it can flow from one context or environment to the next, changing both its resolution and its meaning – which can be both a danger and a blessing in disguise.


Refuse to let the syntaxes of (a) history direct our futures
Resolution theory is a theory of literacy: literacy of the machines, the people, the people creating the machines, and the people being created by the machines. But resolution studies does not only involve the study of the effects of technological progress or the aesthetization of the scales of resolution, which has already been done in books such as Galloway’s The Interface Effect or Chun’s Updating to Remain the Same. Resolution studies also involves research on alternative settings that could have been in place, but are not, and the technologies and affordances that are, as a result, rendered outside the discourse of computation.

But how do we move more easily beside or beyond these generally imposed flows of data? How do we break away from our standard resolutions? First of all, we need to realize that a resolution is more than a formal solution; it involves a technological (dis)entanglement and also an inherent compromise – if something is resolved one way, it is not resolved or rendered in another way. Determinations such as standard resolutions are as dangerous as any other presumption; they preclude alternatives, and perpetuate harmful or merely kludged and kippled ways of running. Key to the study of resolutions is the realization that the condition of data is fluid. Every string of data is ambiguously promiscuous and has the potential to be manipulated into anything. This is how a rheology of data can take form, advocating fluidity in data transactions.

A resolution is the lens through which constituted materialities become signifiers in their own right. They resonate the tonality of the hive mind and constantly transform our technologies into informed material vernaculars. Technology is evolving faster than we as a culture can come to terms with. This is why determinations such as standards are dangerous; they can preclude an alternative. The radical digital materialist believes in informed materials(13): while every string of data is ambiguously fluid and has the potential to be manipulated into anything, every piece of information functions within an environment that encodes and decodes, contextualizes and embeds the data; in doing so, data gains meaning and becomes information. Different forms of ossification slither into every crevice of private life, while unresolved, ungoverned free space seems to be slipping away. There lies the power and danger of standardization.

The compression, software, platform, interface and finally hardware such as the screen or the projector all inform how a string of data is resolved: its presence, legibility and meaning. They enforce and deform data into formatted or informed materials. The infrastructures through which we activate or access our data always engender distortions of our perception. But at the same time, because there is some freedom for the trickster using the fluidity of data, the meaning, rightfulness or ‘factuality’ of data should always be open for debate.

We are in need of a re-(Re-)Distribution of the Sensible. Resolution studies is not only about the effects of technological progress or about the aesthetization of the scales of resolution. Resolution studies is the study of how resolution embeds the tonalities of culture, in more than just its technological facets. Resolution studies researches the standards that could have been in place, but are not. As a form of vernacular resistance, based on the concept of providing ambiguous resolutions, resolution studies employs the liminal resolution of the screen as a looking-glass. Here, ‘technological hyperopia’ – the condition of a user being unable to see clearly the processes that take place during the production of an image – is fractured and gives space to myopia, and vice versa.



(1) Klatt, Oliver. "Compress Process." Wired Germany, January 2013. pp. 106-107.

(2) Galloway, Alexander R. The Interface Effect. Polity Press. 2012. p. 33.

(3) Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. MIT Press, 2016, p. 1.

(4) Deleuze, Gilles. "Postscript on the Societies of Control." October 59, Winter 1992, MIT Press, Cambridge, MA. pp. 3-7.

(5) Rayleigh, Lord. "XXXI. Investigations in optics, with special reference to the spectroscope." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 8.49 (1879): pp. 261-274.

(6) Scientific Papers by John William Strutt, Baron Rayleigh. Vol II. 1881-1887. Cambridge at the University Press. 1900. p. 410.

(7) Gibson, James J. The theory of affordances. In: R. E. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Lawrence Erlbaum Associates, 1977.
http://smithiesdesign.com/wp-content/uploads/2016/09/05-Gibson-Theory-of-affordances.pdf

(8) The importance of the resolution of color encoding has been described in detail by Carolyn Kane in her book: Chromatic algorithms: Synthetic color, computer art, and aesthetics after code. University of Chicago Press, 2014.

(9) Paul, Christiane. "Mediations of Light: Screens as Information Surfaces." in: Digital Light. ed. Cubitt, Sean, Daniel Palmer, and Nathaniel Tkacz. Open Humanities Press, 2015. p. 179 - 192.

(10) Huhtamo, Erkki. "The Pleasures of the Peephole: An Archaeological Exploration of Peep Media." Book of imaginary media: Excavating the dream of the ultimate communication medium (2006): 74-155. p. 6.

(11) Ash, James. The Interface Envelope: Gaming, Technology, Power. Bloomsbury Publishing USA, 2015. p. 35.

(12) Manovich, Lev. "An archeology of a computer screen." Kunstforum International 132 (1995). p. 125.

(13) Here I borrow the term informed material as used by Bensaude-Vincent, B. and I. Stengers in: A History of Chemistry. Cambridge, MA: Harvard University Press, 1996. Matter is not just reshaped mechanically through chemical research and development, but is transformed into informed material: “Instead of imposing a shape on the mass of material, one develops an ‘informed material’ in the sense that the material structure becomes richer and richer in information. Accomplishing this requires a detailed comprehension of the microscopic structure of materials, because it is playing with these molecular, atomic and even subatomic structures that one can invent materials adapted to industrial demands.” (1996: 206)
This term has earlier been ported to the realm of research into digital materials by Susan Schuppli in her lectures on the Material Witness.



Beyond Resolution
An introduction to my journey into resolution studies, as presented at #34C3, Leipzig, Germany || December 2017.

I opened the »institutions of Resolution Disputes« [i.R.D.] on March 28, 2015, as a solo show hosted by Transfer Gallery in New York City. On September 9, 2017, its follow-up »Behind White Shadows« also opened at Transfer. At the heart of both shows lies research on compressions, with one central research object: the Discrete Cosine Transform (DCT) algorithm, the core of the JPEG (and other) compressions. Together, the two exhibitions form the diptych behind this publication, titled Beyond Resolution.

By going beyond resolution, I attempt to uncover and elucidate how resolutions constantly inform both machine vision and human perception. I unpack the ways resolutions organize our contemporary image processing technologies, emphasizing that resolutions not only organize how and what gets seen, but also what images, settings, ways of rendering and points of view are forgotten, obfuscated, or simply dismissed and unsupported.

The journey that Beyond Resolution also represents started on a not-so-fine Saturday morning in early January 2015, when I signed the contract for a research fellowship to write a book on Resolution Studies. For this opportunity I immediately moved back from London to Amsterdam. Unfortunately, and out of the blue, three days before my contract was due to start, my job was put on hold indefinitely. Bureaucratic management dropped me into a financial and ultimately emotional black hole.

When I finally re-organised my finances, I went to the Mojave desert, to take some time. There, from the porch of a little cabin looking out over a dust road, I could feel the infrasound produced by bombs dropped on »Little Baghdad«, a Twentynine Palms military training ground just miles away on the slope of a hill. I became fascinated with these obscure military spaces – where things happened beyond my understanding, yet in my direct field of perception. It reminded me of Trevor Paglen’s book I Could Tell You But Then You Would Have to Be Destroyed by Me (2007). Paglen’s main field of research is mass surveillance and data collection. His work often deals with photography as a mode of non-resolved vision and the production of invisible images. This memory inspired me to re-start my research on resolutions, this time independently, under the title Beyond Resolution.
Both exhibitions came with their own custom patch – the i.R.D. patch (black on black) provided a key to the five encrypted institutions, while the patch for Behind White Shadows featured Lenna Sjööblom aka Lena Söderberg (glow in the dark on white) – a symbol for the ongoing, yet too often ignored racism embedded within the development of image processing technologies.
When I came back to Europe, I started my residency at Schloss Solitude and picked up a teaching position at Merz Akademie, where I developed a full colloquium around Resolution Studies as Artistic Practice. Here I shared and reworked Beyond Resolution with my students and extended it with new theory and practice.
In Beyond Resolution, the i.R.D. (institutions of Resolution Disputes) conduct research into, and critique, the consequences of setting resolutions by following a pentagon of contexts: the effects of scaling and the habitual, material, genealogical, and tactical use (and abuse) of the settings, protocols, and affordances behind our resolutions.
In 2017, while I was teaching at another art school, in a – very traditional – department focused on painting and sculpture, two students separately asked me how art can exist in the digital, when there is an inherent lack of »emotion« within the material. One student even added that digital material is »cold.«
I responded with a counter question: Why do you think classic materials, such as paint or clay, are inherently warm or emotional? By doing so, I hoped to start a dialogue about how their work exists as a combination of physical characteristics and signifying strategies. I tried to make them think not just in terms of material, but in terms of materiality, which is not fixed, but emerges from a set of norms and expectations, traditions, rules and finally the meaning the artist and writer add themselves. I tried to explain that we – as artists – can play with this constellation and consequently, even morph and transform the materiality of, for instance, paint.
The postmodern literary critic N. Katherine Hayles re-conceptualized materiality as ‘the interplay between a text’s physical characteristics and its signifying strategies’. Rather than suggesting that a medium’s materiality is fixed in physicality, Hayles’ re-definition is useful because it »opens the possibility of considering texts as embodied entities while still maintaining a central focus on interpretation. In this view of materiality, it is not merely an inert collection of physical properties but a dynamic quality that emerges from the interplay between the text as a physical artifact, its conceptual content, and the interpretive activities of readers and writers.«
The students lacked an understanding of »materiality« in general – let alone of digital materiality. They had no analytical training to understand how materials work, or how digital media and platforms influence and program us, speaking to our habits through a particularly reflexive vernacular or dialect.
I was shocked: digital literacy is not trivial; it is a prerequisite for agency in our contemporary society. To be able to ignore or unsee the infrastructures that govern our digital technologies, and thus our daily realities, or to presume these infrastructures are ‘hidden’ or ‘magic’, is an act reserved only for the digitally deprived or the highly privileged.
I am convinced that (digital) illiteracy is the result of a lack of education in primary and high school. Even if these students had not studied the digital previously, I would expect some grasp of analytical and hopefully experimental tools as part of their creative thinking strategies. This makes me wonder if today’s teachers are not sufficiently literate themselves, or alternatively if we are not teaching our students the tools and ways of thinking necessary to understand and engage with present forms of ubiquitous information processing.
As a personal reference, I recall that decades ago, when I was ten years old, I dreamed of listening to sound in space. In fact, this is what I wrote on the first page of my diary (1993). I also remember clearly when my teacher stole this dream from me, the moment she told me that because of a lack of matter, there is no sound in space. She concluded that the research I dreamed of was impossible, and my dream shattered.
Only years later, when I was introduced to microwaves and NASA’s use of sonification – the process of rendering any type of data or measurement as sound – did I understand that with the right listening device, anything can be heard. From then on I started teaching my students not just about the electromagnetic spectrum, but also how they, through sonification and other transcoding techniques, could listen to rainbows and the weather. The technical challenges associated with this I call »a rheology of data«. Here, rheology is a term borrowed from the branch of physics that deals with the deformation and flow of matter.
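To make this concrete: a minimal sonification sketch (my own toy illustration, not NASA’s actual pipeline) can map any series of measurements onto audible sine-tone pitches and write the result to a WAV file. The data values and the frequency band below are arbitrary assumptions.

    # Minimal sonification sketch: map measurements to sine-tone pitches.
    import math, struct, wave

    data = [3.1, 4.7, 2.2, 8.9, 6.5]            # any series of measurements
    lo, hi = min(data), max(data)
    rate, dur = 44100, 0.4                       # sample rate; seconds per value

    frames = bytearray()
    for v in data:
        # scale each value into an audible band (here 220-880 Hz)
        f = 220 + (v - lo) / (hi - lo) * 660
        for n in range(int(rate * dur)):
            s = int(32767 * 0.5 * math.sin(2 * math.pi * f * n / rate))
            frames += struct.pack("<h", s)       # 16-bit signed sample

    with wave.open("sonified.wav", "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
        w.writeframes(bytes(frames))

Any data – temperature logs, electromagnetic measurements, pixel rows – can be pushed through the same mapping; only the listening device changes.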
The Argentinian writer Jorge Luis Borges’s 1941 short story »The Library of Babel« is an enduring inspiration to me. In the short story, the author describes a universe in the form of a vast library containing all the possible books that follow a few simple rules: every book consists of 410 pages; each page holds 40 lines of approximately 80 characters; and every character is one of 25 orthographic symbols: 22 letters, a period, a comma, and a space. While the exact number of books in the Library of Babel can be calculated, Borges describes the library as endless. I think Borges was aware of the morphing quality of the materiality of books and that, as a consequence, the library is full of books that read like nonsense.
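Following these rules, the size of the Library is a finite piece of arithmetic; a back-of-the-envelope sketch:

    # Back-of-the-envelope check of the Library's size, following Borges' rules.
    import math

    chars_per_book = 410 * 40 * 80      # pages x lines x characters = 1,312,000
    symbols = 25                        # 22 letters, period, comma, space
    # Total distinct books = 25 ** 1,312,000. Its number of decimal digits:
    digits = math.floor(chars_per_book * math.log10(symbols)) + 1
    print(f"{digits:,}")                # 1,834,098 - finite, yet unreadably vast

A number with roughly 1.8 million digits: finite, but so far beyond human scale that Borges’s »endless« feels exact.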

The fascinating part of this story starts when Borges describes the behavior of the visitors to the library – in particular the Purifiers, who arbitrarily destroy books that do not follow their rules of language or decoding. The word arbitrarily is important here, because it references the fluidity of the library: its openness to different languages and other systems of interpretation. What I read here is a very clear, asynchronous metaphor for our contemporary approach to data: most people are only interested in information and dismiss (the value of) RAW, non-formatted, or unprocessed data (RAW data is of course an oxymoron, as data is never RAW but always a cultural object in itself). As a result, the Purifiers do not accept or realise that when something is illegible, this does not mean it is just garbage. It can simply mean there is no key, or that the string of data – the book – has not been run through the right program, or read in the right language, that decodes its data into humanly legible information.

Besides the rheology of data, I worked with my students to imagine what syphoning could mean to our computational experience. Syphoning, a term borrowed from the open source Mac OS X plugin technology Syphon (by Tom Butterworth and Anton Marini), refers to applications sharing information – such as frames, full-frame-rate video, or stills – with one another in real time. For instance, Syphon allows me to project my slides or video as textures on top of 3D objects (from Modul8 into Unity). This allowed me to, at least partially, escape the otherwise flat, quadrilateral interfaces of (digital) images and video, and leak my content through the walls of applications.
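Syphon itself is a native macOS framework, but the underlying gesture – one application publishing frames that another picks up by name – can be sketched as a toy analogy (not the actual Syphon API; numpy is assumed for the pixel buffer):

    # Toy analogy for frame sharing between applications (NOT the Syphon API):
    # one process publishes pixel data into named shared memory; another reads it.
    from multiprocessing import shared_memory
    import numpy as np

    W, H = 640, 360

    # "Server": publish a frame under a well-known name
    frame = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)
    shm = shared_memory.SharedMemory(name="demo_frame", create=True, size=frame.nbytes)
    np.ndarray(frame.shape, frame.dtype, buffer=shm.buf)[:] = frame

    # "Client": any other process can now attach by name and read the same pixels
    shm2 = shared_memory.SharedMemory(name="demo_frame")
    view = np.ndarray((H, W, 3), np.uint8, buffer=shm2.buf)
    print(view.mean())                  # the client sees the server's frame

    shm2.close(); shm.close(); shm.unlink()

The point of the analogy: once frames live in a shared, named space rather than inside one application’s window, content starts to leak through the walls of applications.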
In the field of computation – and especially in image processing – all renders follow quadrilateral, ecology-dependent, standard solutions built on tradeoffs (compromises) that deal with efficiency and functionality in the realms of storage, processing, and transmission. However, what I am interested in is the creation of circles, pentagons, and other, more organic manifolds! If this were possible, our computational machines would work entirely differently; we could create modular or even syphoning relationships between text files, and, as demonstrated in Chicago glitch artist Jon Satrom’s 2011 QTzrk installation, videos could have uneven corners, multiple timelines, and changing soundtracks.
Inspired by these ideas, I built Compress Process, an application that makes it possible to navigate video inside 3D environments, where sound is triggered and pans as you navigate. Unfortunately, upon release, Wired magazine reviewed the experiment as »a flopped video game«. Ironically, they could not imagine that in my demonstration, video exists outside the confines of the traditional, flat, two-dimensional interface; this other resolution – 3D – meant the video work was re-categorized as a gaming application.
In a time when image processing technologies function as black boxes, we desperately need research, reflection, and re-evaluation of these machines of obfuscation. However, institutions – schools and publications alike – appear to consider the same old settings over and over, without critically analyzing or deconstructing the programs of our newer media. As a result, there are no studies of alternative resolutions. Instead we only teach and learn to copy: simulate the behavior of the interface, replicate the information, paste the data.
A condition that reminds me of science fiction writer Philip K. Dick’s dystopian short story Pay for the Printer, in which printers print printers, a process that finally results in printers printing useless mush. If we do not approach our resolutions analytically, a next generation will likely be discombobulated by the loss of quality between subsequent copies or transcopies of data. As a result, they will turn into a partition of institutionalized programs producing monotonous junk.
Together these observations set up a pressing research agenda. To me it is important to ask how new resolutions can be created, and whether every decontextualized materiality is immediately re-contextualized inside other, already existing, paradigms or interfaces. In The Interface Effect, NYU professor of new media Alexander Galloway writes that an interface is »not a thing, an interface is always an effect. It is always a process or translation« (Galloway 2013: p. 33). Does this mean that every time we deal with data we depend completely on our conditioning? Is it possible to escape the normative or habitual interpretation of our interfaces?
To establish a better understanding of our technologies, we need to acknowledge that the term »resolution« does not just refer to a numerical quantity or a measure of acutance. A resolution involves the result of a consolidation between interfaces, protocols, and materialities. Resolutions thus also entail a space of compromise between these different actors.
Think for instance about how different objects, such as a lens, film, image sensor, and compression algorithm, dispute over settings (frame rate, number of pixels, and so forth) following certain affordances (standards) – possible settings a technology has, not just by itself, but in connection or composition with other elements. Generally, settings within these conjunctions either ossify as requirements or de facto norms, or are notated as de jure – legally binding – standards by organizations such as the International Organization for Standardization (ISO). Through this process, resolving an image becomes less complex, but ultimately also less transparent and more black-boxed.
It’s not just institutions like ISO that program, encode, and regulate (standardize) the flows of data in and between our technologies, or that arrange the data in our machines following systems that underline efficiency or functionality. In fact, data is often formatted to include all kinds of inefficiencies that the user is not conditioned, or even supposed, to see, think about, or question. Our data is, for instance, often (re-)encoded and deformed by nepotist, sometimes covertly operating cartels, for reasons such as insidious data collection or locking users into proprietary software.
So while in the digital realm the term »resolution« is often simplified to a number signifying the width and height of, for instance, a screen, the critical use I propose also considers the screen’s »depth.« And it is within this depth that reflections on the technological procedures behind our images can take place.
In the 1950s and 1960s, the United States Air Force installed different versions of the 1951 USAF resolution test chart in the Mojave desert to calibrate aerial photography and video. Documentary filmmaker and writer Hito Steyerl’s video essay How Not to be Seen. A Fucking Didactic Educational .MOV File (2013) was filmed at one of these resolution targets, just west of Cuddeback Lake. Steyerl presents viewers with an educational manual which, via a critical consideration of the resolutions and surveillance embedded in digital and analogue technologies, argues that »whatever is not captured by resolution is invisible.«
Even though I am aware that her work by no means claims to be a comprehensive description of the term »resolution«, or of the ways in which resolutions perform in the age of mass surveillance, I believe the essay nonetheless misses a crucial perspective. Resolutions do not simply present some things as visible while rendering Others obscured or invisible. Rather, a resolution should also be understood as the choice between certain technological processes and materials, each involving its own specific standard protocols and affordances, which in turn inform the settings that govern a final capture. However, these settings and their inherent affordances – or the possibilities to choose other settings – have become more and more complex and obscure themselves, as they exist inside black-boxed interfaces. Moreover, while resolutions compromise, obfuscate, or obscure certain visual outcomes, the processes of standardization and upgrade culture as a whole also compromise certain technological affordances – new ways of seeing or perceiving – altogether. And it is these alternative technologies of seeing, these obscured and deleted settings, that also need to be considered as part of resolution studies.
The i.R.D. is dedicated to researching the interests of anti-utopic, obfuscated, lost, and unseen, or simply »too good to be implemented« resolutions. It functions as a stage for non-protocological radical digital materialisms. The i.R.D. is a place for the »otherwise« or »dysfunctional« to be empowered, or at least to be recognized. From inside the i.R.D., technical literacy is considered to be both a strength and a limitation: The i.R.D. shows resolutions beyond our usual field of view, and points to settings and interfaces we have learned to refuse to recognize.
While the i.R.D. calls attention to media resolutions, it does not just aestheticize their formal qualities or denounce them as »evil,« as new media professors Andrew Goffey and Matthew Fuller did in Evil Media (2012). The i.R.D. could easily have become a Wunderkammer for artifacts that already exist within our current resolutions, exposing standards as Readymades in technological boîtes-en-valise. Curiosity cabinets are special spaces, but in a way they are also dead; they celebrate objects behind glass, or safely stowed away inside an acrylic cube. I can imagine the man responsible for such a collection of technological artifacts. There he sits, in the corner, smoking a pipe, looking over his conquests.
This type of format would have turned the i.R.D. into a static capture of hopelessness; an accumulation that will not activate or change anything; a private, boutique collection of evil. An institute that intends to host disputes cannot get away with simply displaying objects of contention. Disputes involve discussions and debate. In other words: the objects need to be unmuted – to be given a voice. A dilemma that informs some of my key questions: how can objects be displayed in an »active« way? How do you exhibit the »invisible«?
Genealogy (in terms of, for instance, upgrade culture) and ecology (the environment and the affordances it offers for the dynamic, inter-relational processes of objects) play a big role in the construction of resolutions. This is why the i.R.D. hosts classic resolutions and their inherent (normally obfuscated) artifacts – dots, lines, blocks, and wavelets – inside an »Ecology of Compression Complexities«: a study of compression artifacts, their qualities, and their ways of diversion, dispersion, and (alternative) functioning. This study employs tactics of »creative problem creation« – a type of tactic coined by Jon Satrom during GLI.TC/H 2111 – which shifts authorship back to the actors involved in the setting of a resolution.
In Tacit:Blue (Rosa Menkman, video, 2015), small interruptions in an otherwise smooth blue video document a conversation between two cryptography technologies: a Masonic Pigpen or Freemasons’ cipher (a basic, archaic, geometric simple-substitution cipher) and the cryptographic technology DCT (Rosa Menkman, Discrete Cosine Transform encryption, 2015). The sound and light that make up the blue surface are generated by transcoding the same electric signals using different components; what you see is what you hear.
The technology responsible for the audiovisual piece is the NovaDrone (Pete Edwards/Casper Electronics, 2012), a small AV synthesizer designed by Casper Electronics. In essence, the NovaDrone is a noise machine with a flickering military RGB LED on top. The synthesizer is easy to play with; it offers three channels of sound and light (RGB) and the board has twelve potentiometers and ten switches to control the six oscillators routed through a 1/4-inch sound output, with which you can create densely textured drones, or in the case of Tacit:Blue, a rather monotonous, single AV color / frequency distortion.
The video images were created using the more exciting functions of the NovaDrone. Placing the active camera of an iPhone against the LED on top of the NovaDrone turns the screen of the phone into a wild mess of disjointed colors, revealing the NovaDrone’s hidden second use: a light synthesizer.
In this process the NovaDrone exploits the iPhone’s CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, a technology that is part of most commercial cameras and is responsible for transcoding captured light into image data. When the camera function of the phone is activated, the CMOS captures pixel values one row at a time, moving down the sensor. But because the flicker frequency of the military RGB LED is set by the user and is higher than the row-readout speed of the phone’s CMOS, the iPhone camera is unable to sync up with the LED. What appears on the screen of the iPhone is an interpretation of its input, riddled with aliasing known as the rolling shutter artifact: a resolution dispute between the CMOS and the RGB LED. Technology and its inherent resolutions are never neutral; every time a new way of seeing is created, a new prehistory is being written.
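This dispute is easy to simulate. In the sketch below, the row-readout time and flicker frequency are illustrative assumptions, not measurements of any actual iPhone or NovaDrone; each sensor row samples the LED at a slightly later moment, so any flicker faster than the readout surfaces as horizontal bands:

    # Sketch of the rolling-shutter dispute: each sensor row is read at a
    # slightly later moment, so a fast-flickering LED ends up as bands.
    import numpy as np

    rows, cols = 480, 640
    row_readout_time = 30e-6            # assumed time to read one row (seconds)
    led_freq = 2000.0                   # LED flicker far above the frame rate

    t = np.arange(rows) * row_readout_time          # capture time of each row
    led_on = np.sin(2 * np.pi * led_freq * t) > 0   # LED state per row

    frame = np.repeat(led_on.astype(np.uint8)[:, None] * 255, cols, axis=1)
    # 'frame' now shows horizontal light/dark bands: rows captured while the
    # LED was on are bright, rows captured while it was off are dark.

With the numbers above, one flicker cycle spans roughly seventeen rows, so the dispute between the two clocks becomes plainly visible as stripes.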
Myopia consisted of a giant vinyl wall installation of 12 x 4 meters, plus extruding vectors, presenting a zoomed-in perspective of JPEG2000 wavelet compression artifacts. These artifacts were the aesthetic result of a »glitch« that occurred when I added a line of »another language« into the data of a high-res JPEG2000 image – a compression standard developed for medical imaging, supporting zoom without block distortion.
The title Myopia hinted at a proposed solution for our collective suffering from technological hyperopia – farsightedness: the condition of seeing things sharply only at a distance. With Myopia I built a place that disintegrated the architecture of zooming and endowed the public with the »qualities« of being short-sighted. Myopia offered an abnormal view: a non-flat wall that presented the viewer a look into the compression – a new perspective. This was echoed in the conclusion of the installation, the day before the i.R.D. closed, when visitors were invited to bring an X-Acto knife, cut out their own resolution of Myopia, and mount it on any institution of choice (a book, computer, or other rigid surface).
Resolutions determine what is run, read, and seen – and what is not. In a way, resolutions form a lens of (p)reprogrammed »truths.« But their actions and qualities have moved beyond the fold of our perspectives, and we have gradually become blind to the politics of these congealed and hardened compromises.
DCT (named after the Discrete Cosine Transform, the algorithm that forms the core of JPEG compression) uses the 64 macroblocks that form the visual »alphabet« of any JPEG-compressed image.

The premise of DCT is that the legibility of an encrypted message does not only depend on the complexity of the encryption algorithm, but equally on the placement of the message. DCT, a font that can be used on any TTF (TrueType Font) supporting device, applies both cryptography and steganography; hidden in plain sight, the message is transcoded and embedded on the surface of the image, where it looks like a compression artifact.
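For reference, the 64 »letters« of this alphabet are the two-dimensional DCT basis patterns; a short sketch based on the textbook DCT-II definition generates all of them:

    # Generate the 64 two-dimensional DCT basis patterns - the 8x8 »alphabet«
    # out of which every JPEG-compressed image block is summed.
    import numpy as np

    N = 8
    def basis(u, v):
        x = np.arange(N)
        cu = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
        cv = np.sqrt(1 / N) if v == 0 else np.sqrt(2 / N)
        col = cu * np.cos((2 * x + 1) * u * np.pi / (2 * N))
        row = cv * np.cos((2 * x + 1) * v * np.pi / (2 * N))
        return np.outer(col, row)       # one of the 64 macroblock patterns

    # Tile all 64 patterns into one 64x64 preview image
    alphabet = np.block([[basis(u, v) for v in range(N)] for u in range(N)])

Every 8 x 8 block of every JPEG is a weighted sum of exactly these patterns, which is what makes them usable as the glyphs of a font.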
A second part of this third work was inspired by one of the Symbology patches uncovered by Trevor Paglen. It consists of a logo for the i.R.D., embroidered in a black-on-black patch, providing a key to decipher anything written in DCT: 010 0000 – 101 1111. These binary values also decipher the third and final work, titled institutions: five statements written in manifesto style, printed in DCT on acrylic.
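Read as ASCII codepoints – one way to interpret the key – the two binary values mark out the printable range from space (32) to underscore (95), covering the digits, punctuation, and uppercase alphabet:

    # Reading the patch key as ASCII codepoints (one interpretation):
    lo, hi = 0b0100000, 0b1011111       # 32 and 95
    print(lo, hi)                       # 32 95
    print(''.join(chr(c) for c in range(lo, hi + 1)))
    #  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_
    # i.e. the printable range from space to underscore - the characters
    # a DCT-encoded message can draw on.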
When MOTI, the Museum Of The Image (in collaboration with the Institute of Network Cultures, which had previously contracted me to write my book on Resolution Studies, but then floundered), announced their first Crypto Design Challenge later that year (August 2015), I entered the competition with DCT. Delivered as an encrypted message against institutions and their backwards bureaucracy, DCT finally won the shared first prize in the Crypto Design Challenge.

Another institutional success came in the winter of 2016. Six years after the creation of A Vernacular of File Formats (Rosa Menkman, 2010), I was invited to submit the work as part of a large-scale joint acquisition of Stedelijk Museum Amsterdam and MOTI.
A file format is an encoding system that organizes data according to a particular syntax. These organizations are commonly referred to as compression algorithms. A Vernacular of File Formats consists of one source image, a self-portrait showing my face, and an arrangement of recompressed, disturbed, or de-calibrated iterations. By compressing the source image using different compression languages and subsequently introducing the same (or a similar) error into each file, the normally invisible compression language presents itself on the surface of the image, resulting in a compilation of de-calibrated and unresolved self-portraits showcasing the aesthetic complexities of all the different file format languages.
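A minimal sketch of this procedure – assuming Pillow is installed and using a stand-in source image; the original work involved many more formats and tools – could look like this:

    # Sketch of the Vernacular procedure: save one source image in different
    # formats, then flip the same byte in each file and try to reopen it.
    from PIL import Image

    src = Image.open("source.png").convert("RGB")   # stand-in self-portrait

    for fmt in ("JPEG", "BMP", "TIFF", "PNG"):
        name = f"portrait.{fmt.lower()}"
        src.save(name, fmt)
        data = bytearray(open(name, "rb").read())
        data[len(data) // 2] ^= 0xFF                # the 'same error' in every format
        open(name, "wb").write(data)
        # Each compression language now resolves (or refuses) the damage in
        # its own dialect: JPEG smears blocks, BMP shifts raw pixels, and a
        # PNG may refuse to open at all once its checksums break.

The same wound, inflicted on different encodings, surfaces as entirely different aesthetics – which is precisely the vernacular the work catalogs.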
After thorough conversations, both institutions agreed that the most interesting and best format for acquisition was the full digital archive, consisting of more than 16GB of data (661 files). This included the original and »glitched« – broken – and unstable image files, the Monglot software (Rosa Menkman and Johan Larsby, 2011), videos, and original PDFs. A copy of the whole compilation is now part of the archive of the Stedelijk Museum. The PDF of A Vernacular of File Formats remains freely downloadable and, following the spirit of COPY < IT > RIGHT !, the research archive will soon be freely available online, inviting artists, students, and designers to use the files as source footage for their own work and research into compression artifacts.
By making my work available for re-use I intend to share the creative process and knowledge I have obtained throughout its productions. I adopt video artist and activist Phil Morton’s statement: »First, it’s okay to copy! Believe in the process of copying as much as you can« (Phil Morton, Distribution Religion, 1973). Generally, copying can be great practice; it can open up alternative possibilities, be a tactic to learn, and create access. Unfortunately, copying can also be done in ways that are damaging or wrong, as has happened repeatedly with A Vernacular of File Formats: for instance, when extracted images were used as application icons for the apps Glitch! for Android and Glitch Camera for iPhone, or the many times my work was featured on commercially sold clothing or functioned as the cover image on two different record sleeves – without permission, compensation, or proper accreditation.
For Elevate Festival 2017 I was invited to talk about my experiences with losing authorship over images showing my face. Only during the preparations for this talk did I realise that it is not that uncommon to lose (a sense of) authorship over an image (even if it is your face). An example of this is the long-standing »tradition« within the professional field of image processing of co-opting (stealing) the image of a Caucasian female face for the production of color test cards – a practice responsible for introducing a racial bias into the standard settings of image processing. This revelation prompted me to start the research at the basis of the second part of the diptych Beyond Resolution, titled Behind White Shadows (2017).

While a »one size fits all« – or, as a technician once wrote, »physics is physics« – approach has become the standard, in reality the various skin complexions reflect light differently. This requires a composite interplay between the different settings involved when subjects are captured. Despite the obvious need to factor in these different requirements for hues and complexions, certain technologies only implement support for one – the Caucasian complexion – and compromise the resolution of Other complexions.

A much discussed example is the image of Lena Söderberg (in short »Lena«), featured as the Miss November 1972 Playboy centerfold and subsequently co-opted as a test image during the implementation of DCT in the JPEG compression. The rights of use of the Lena image were never properly cleared or checked with Playboy. Yet the image has remained, up until today, a default image on which JPEG compression is tested and built. Scott Acton, editor of IEEE Transactions on Image Processing, writes in a critical piece: »We could be fine-tuning our algorithms, our approaches to this one image. […] They will do great on that one image, but will they do well on anything else? […] In 2016, demonstrating that something works on Lena isn’t really demonstrating that the technology works.«

To uncover and gain better insight into the processes behind the biased protocols that make up standard settings, we are required to ask some fundamental questions: Who gets to decide the hegemonic conventions that resolve the image? Through what processes is this power legitimized, and how is it elevated to a normative status? Moreover, who decides the principal point of view, and whose perspective is used in the operation of scanning or imaging technologies? All in all, who casts these (Caucasian) »shadows«?

One way to make instances such as the habitual whiteness of color test cards more apparent is by insisting that these standard images, trapped in the histories of our technologies, become part of the public domain. These images need to lose their copyright along with their elusive power. The stories of standardization belong in high-school textbooks, and the possible violence associated with this process should be studied as part of any suitable curriculum. In order to illuminate the white shadows that govern the outcomes of our image processing technologies, we must first write these genealogies of standardization.
This type of research can easily become densely theoretical – something that is not bad in itself, but that risks losing the audience. This has led me to rethink and re-frame the output of my practice, and to also show it as a series of »compression ethnographies«: videos, 3D environments, poems, and other experimental forms in which I anthropomorphize compressions and let them speak in their own languages. In doing so, I enable compressions to tell their own stories, about their conception and development, in the language of their own data organization.
An example is the Behind White Shadows centerpiece: DCT:SYPHONING. The 1000000th (64th) interval. Created and performed in VR, it is a fictional journey, told as a modern version of Edwin Abbott Abbott’s 1884 novel Flatland – but in this case told through the historical progression of image compression complexities. In DCT:SYPHONING, two DCT blocks, Senior and Junior, lead us through a universe of abstract, simulated environments made from the materials of compression – evolving from early raster graphics to our contemporary state of CGI realism. At each level, this virtual world interferes with the formal properties of VR to create stunning and disorienting environments, throwing into question our preconceived notions of virtual reality.

As a third and final work, the exhibition Behind White Shadows also showed a four-by-three-meter Spomenik (monument) for resolutions that will never exist: a non-quadrilateral, extruding, video-mapped sculpture that presents videos shot from within DCT:SYPHONING. Technically, the Spomenik functions as an oddly shaped screen with mapped video, consisting of 3D vectors extruding in space.

Historically, a Spomenik is a piece of abstract, Brutalist, monumental anti-fascist architecture from former Yugoslavia, commemorating or meaning »many different things to many people.« The Spomenik in Behind White Shadows is dedicated to resolutions that will never exist and to »screen objects« (shards) that were never implemented, such as the non-quadrilateral screen. It commemorates the biased (white) genealogies of image and video compression. The installed shard is three meters high, creating an obscured compartment in the back of the Spomenik: a small room hiding a VR installation that runs DCT:SYPHONING, while the projection on the Spomenik features video footage from within the VR. In doing so, the Spomenik reflects literal light on the issues surrounding image processing technologies and addresses some of the hegemonic conventions that continuously obscure our view.











[ 0000 ]









Resolution Dispute 0000 : Habit
“Habit isn’t the same as instinct; habit is a learned action that becomes automatic. Crucially, habit is always something you learn from others, or in response to the environment. […] I understand habit as the scar of others within the self.”
- Chun, Wendy. Characters in a Drama called Big Data, in: Sonic Acts. Noise of Being Reader, 2017. p 114.
 















Following the ideal logic of transparent immediacy, technology is designed in such a way that the user will forget about the presence of the medium. Generally, technology aims to offer an uninterrupted flow of functionality and information. This concept of flow is not just a trait of the machine, but also a feature of society as a whole, writes DeLanda.1 DeLanda distinguishes between chaotic, disconnected flows and stable flows of matter that move in continuous variations, conveying singularities. DeLanda also references Deleuze and Guattari, who describe flow in terms of the beliefs and desires that both stimulate and maintain society.2 Deleuze and Guattari write that a flow is something that comes into existence over long periods of time. Within these periods, conventions, customs, and individual habits are established, while deviations become rare occurrences and are often (mis)understood as accidents (or, in computation: glitches). Although the meaningfulness of everyday life might in fact be disclosed within these rare occurrences, their impact or relevance is often ruled out because of social tendencies to emphasize the norm.

To move beyond resolution also means to move beyond the habitual. One way to do this is by creating noise, for instance in the form of glitch: a short-lived fault or break from an expected flow of operation within a (digital) system. The glitch is a puzzling, difficult-to-define, and enchanting noise artifact; it reveals itself as accident, chaos, or laceration, and gives a glimpse into normally obfuscated machine language. Rather than creating the illusion of a transparent, well-working interface to information, the glitch can impose both technological and perceptual challenges to habitual and ideological conventions. It shows the machine revealing itself. Suddenly, the computer appears unconventionally deep, in contrast to the more banal, predictable, surface-level behaviors of ‘normal’ machines and systems.

To really understand the complexity of the user’s perceptual experience, it is important to focus on these rare occurrences – to create an awareness of the user’s habits by use of, for instance, the accident.

︎︎︎︎︎︎

1. Manuel DeLanda, War in the Age of Intelligent Machines, New York: Zone Books, 1991. p. 20.
2. Gilles Deleuze and Pierre-Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, Trans. B. Massumi, London: The Athlone Press, 1988. p. 219.

The slides underneath are from the New Media class ‘Beyond Resolution’, which I taught as substitute professor at the KHK (Kassel) in the Summer Semester of 2018. During this week we unpacked the term ‘Habitual Use’ via research into various layers of standardization. The slides are clickable; they either link to the work referenced or zoom.







[ 0001 ]

Resolution Dispute 0001 : Materiality 
“‘Material witness’ is a legal term; it refers to someone who has knowledge pertinent to a criminal act or event that could be significant to the outcome of a trial. In my work, I poach the term ‘material witness’ to express the ways in which matter carries trace evidence of external events. But the material witness also performs a twofold operation; it is a double agent. The material witness does not only refer to the evidence of event but also the event of evidence.”
- Schuppli, Susan. Dark Matters: an interview with Susan Schuppli, in: Living Earth, 2016.
 







“Materiality is reconceptualized as the interplay between a text's physical characteristics and its signifying strategies, a move that entwines instantiation and signification at the outset. This definition opens the possibility of considering texts as embodied entities while still maintaining a central focus on interpretation. It makes materiality an emergent property, so that it cannot be specified in advance, as if it were a pre-given entity. Rather, materiality is open to debate and interpretation, ensuring that discussions about the text's "meaning" will also take into account its physical specificity as well.”
- Hayles, N. Katherine. "Print is flat, code is deep: The importance of media-specific analysis." Poetics Today 25.1 (2004): 67-90.

A reflexive approach makes it possible to re-conceptualize materiality itself as ‘the interplay between a text’s physical characteristics and its signifying strategies’. Rather than thinking of the medium’s material as fixed in physicality, this re-definition is useful because it opens the possibility of considering any text as an embodied entity “while still maintaining a central focus on interpretation. In this view of materiality, it is not merely an inert collection of physical properties but a dynamic quality that emerges from the interplay between the text as a physical artifact, its conceptual content, and the interpretive activities of readers and writers.”

Reflections on materiality should not just happen on a technological level. To fully understand a work, each level of materiality should be studied: the physical and technological artifact, its conceptual content, and the interpretive activities of reader, artist, and audience. [The choice of any] digital material is not innocent or meaningless. With enough knowledge of the material, an investigation into digital materiality can uncover stories about the origin and history of the material, written by others.


▁∣∖▁╱◝◟.❘╱▔▔╲̸/╲╱▔▔▔╲∣∖╱▔╲▁▁∣∖▁╱◝◟.╱▔▔╲________


The slides underneath are from the course ‘Materiality’, which took place over three meetings during the New Media class ‘Beyond Resolution’ I taught as substitute professor at the KHK (Kassel) in 2018. During these weeks we unpacked the term ‘materiality’ via research into various file formats. The slides are clickable; they either link to the work referenced or zoom.