Mark Jackson: Diagram of the Fold, The Actuality of Virtual Architecture

Contemporary theoretical work in the discipline of architecture has a particular focus on aspects of digital technologies as these begin to redefine our notions of spatiality, habitation and form making. Theorists such as Stephen Perella and Greg Lynn have pointed to new thresholds for thinking the architectural, as cyber architectures begin to proliferate. In this context, the philosopher Gilles Deleuze is often invoked for his thinking of the virtual and the fold, while the theorists John Rajchman and Elizabeth Grosz have been instrumental in sustaining a dialogue between key architectural practitioners and a series of Deleuzian problematics. This paper critically examines some of the work of Perella, Lynn and other cyber architecture
theorists in the light of Deleuze’s own distinction between virtuality and actuality. A key Deleuzian term that allows for a critical analysis of virtuality and architecture is his notion of the ‘diagram.’ This term has been introduced to architectural discourse by Rajchman in the Any series of conferences, and has been taken up more recently in the context of the works of Peter Eisenman.

My aim is to introduce an ethical dimension to the issue of digital technologies and architectural practice by activating the ethical dimension in Deleuze’s own philosophical writings that circulate about his elaborations of virtuality, folding and the diagram.

The fundamental event of the modern age is the conquest of the world as picture

Martin Heidegger, “The Age of the World Picture” 1

It seems extraordinary at times that the techno-celebratory tone that accompanies so much writing on digital innovation floats on the most conservative, which is to say, idealist, of metaphysical thinking concerning questions of technology, instrumentalism, representation and human cognition. One discipline particularly marked by this is the field of architecture, affected in two significant ways by the advent of digital technologies. On the one hand, the most pragmatic and instrumentalist applications of computer-aided design software have “revolutionised” the design office, exchanging the manual skills of hand drafting for the dexterity of keyboard and mouse. On the other hand, architectural theory has taken a delirious lunge towards the virtual. Yet, if we critically examine the fundamental grounds for the pragmatists or the radical epistemologists, we will find that same ground exposed, for example, by Heidegger in modernity’s adherence to the Cartesian subject, and the anthropocentrism inherent in representational frameworks. In short, that which goes by the name “cyberspace” or “virtual reality” tends to be construed on the most conservative of metaphysical principles. Heidegger’s seminal writings on the essence of technology and the image have an uncanny anticipation of the impact of cybernetic systems on the construal of our forms of knowing. In “The Age of the World Picture,” our Cartesian legacy of subject/object relations understood in
terms of “representation” and human calculability is countered by an invocation to another
modality of being-in-the-world, another sense of being human, that Heidegger nominates under the name of the incalculable:

This becoming incalculable remains the invisible shadow that is cast around all things everywhere when man has been transformed into subiectum and the world into picture. By means of this shadow the modern world extends itself out into a space withdrawn from representation, and so lends to the incalculable the
determinateness peculiar to it, as well as an historical uniqueness. This shadow, however, points to something else, which it is denied to us of today to know. 2

This theme of the incalculable has become something of a preoccupation for the contemporary philosopher Jean-Francois Lyotard, though read via a Kantian legacy of the sublime. 3 In Kant’s aesthetic philosophy, the mathematically and the dynamically sublime name precisely that which is presented in sense apprehension yet exceeds the capacity of the Faculty of Understanding to bring about a determinant judgement, which is to say, to bring a sense impression into the order of the calculable. 4 Lyotard references this in his essay “Answer to the Question: What is the Postmodern?” in terms of a relationship established between the presentable and the conceivable. 5 For Lyotard, modernity is marked by a particular relation between the conceivable and the presentable in as much as modernity is characterised by a projective futurity, or what Heidegger would call the thrownness of being. This is the anxiety or uncanniness of the new. For Lyotard,
the conundrum of modernity is encapsulated in the notion of presenting the unpresentable. If there is indeed, ever, the event of the new, it would be the presentation of that for which there is no existing model, making present or presentable that for which there is no conceivable calculation. The new, as new, would present us with an inexpressibility, hence Lyotard’s turning to the Kantian sublime.

It is on this relation between the conceivable and the presentable that Lyotard draws a distinction between modernity and postmodernity. Modernity, in whatever age, is marked by a singular strategy when confronted by the anxiety of presenting the unpresentable. It has recourse to form, nostalgia for form, a transforming of formlessness into the calculability of good form, which is to say recourse to representational schemas and anthropocentric worldviews. On the other hand, postmodernity, in whatever age, is marked by something altogether different that Lyotard likens to the incalculability of the sublime, not the pleasure in good form, but something closer to the ec-static of a Heideggerian authentic being, which throws into question the centrism of anthropocentrism, thereby revealing the possibility of a new horizon for questioning what it is to be human. One thinks here also of the formlessness celebrated by Georges Bataille, and the coincident railing against architecture in his writings. 6

In the space provided for this paper, I am going to side-step a critical examination of instrumentalist applications of computer-aided design, and focus rather on the radical epistemologists of the virtual, who most often reference the writings of Gilles Deleuze as a touchstone. 7 In doing so, I want, firstly, to outline the position of major proponents of what is termed “hypersurface architecture,” and, secondly, to provide some critique of this work. “Hypersurface architecture” is a term coined by Stephen Perella, a designer and academic attached to Columbia University’s School of Architecture.

‘Hyper’ implies human agency reconfigured by digital culture, and ‘surface’ is the enfolding of substances into differentiated topologies. The term hypersurface is not a concept that contains meaning, but an event; one with a material dimension. We are currently at the threshold of this new configuration as a site of emergence for new intensities of culture and intersubjectivity.8
Quoting the Japanese architect Toyo Ito, Perella stresses that with the proliferation of new media forms and their influx into urban and architectural space, such space is becoming increasingly cinematic and fluid. Architecture is emerging as the construal of a new notion of body, one that imbricates and hybridises two contemporary, though dichotomous, bodies. Firstly, there are “our material bodies [that] are a primitive mechanism” and secondly, “another kind of body which consists of circulating electronic information.”9 Traditionally, architecture has been seen as that which functions to house this ‘primitive’ body and, as a cultural practice, has been dominated by the question of the forms that such housing may take. Indeed, architecture has a long history of anthropomorphism, precisely as the convergence of the question of form as a response to the ideality of the body proper. Perella signals a radical shift in all of this. With respect to function, form no longer plays a game of correspondence. One may see this in any number of
contemporary design practices, many of which have not been absorbed in issues of hypersurface.10 And, traditionally, architecture as ornament or sign has been secondary to other determinants, particularly with respect to a transcendent and idealised notion of form. With the imbrication of new media technologies in architectural projects, there is the advent of ‘pixel’ architecture, as the manifestation of information space, whose topological materiality is surface. Hence Perella can say:

Pixel or media architecture has sought to bring the vitality of the electronic sign into the surfaces of architecture, but in order to do this has negated or neutralised form. 11

There is recognition here of the massive impact contemporary capitalist strategies of mediatisation have on everyday culture, and an effort to critically engage with them in design practice. Hypersurfaces are a reconfiguration of both the human subject and the world of objects, a rethinking of Cartesian space and phenomenological grounds for perception. 12 Or, as Perella says: “Hypersurface is fully intense when both surface/substance and signification play through each other in a temporal flux.”13 Hence, everything becomes surface. Computer-aided design software allows for a shift in design emphasis away from volume-space considerations to those of activated surfaces, or what Perella refers to as ‘topological architecture,’ wherein containment or volume is the enacting of folding and refolding surface. Perella sees here the collision of two cultural arenas that are still a little separated. Avant-garde architectural practices, assisted by
digital design technologies, are inventing a new architectural plasticity in activated surfaces, while advanced consumer culture has already absorbed the propulsion of the sign into the surface and the surface into the sign. Inevitably, those surface architectures currently being invented will be colonised by pixel architectures of pure commercialism:

An influx of new digital technology interconnects with other transformations taking place in global economic, social, and scientific practices cultivating fluid, continuous and responsive manifestations of architectural morphogenesis. 14

The outcomes of such architectures present an ongoing incommensurate relation between form and image, where surface is activated and motile and hence the perception of volume or containment is open, a ‘fluxus’ architecture. For Perella, such a notion of hypersurface as event-architecture necessarily needs to accommodate the incommensurability of Heideggerian phenomenology and Deleuzian empiricism. Most certainly, though, it is Deleuze’s writing from his early co-authored Anti-Oedipus to his The Fold that establishes the philosophical ground for Perella’s thinking. 15 A key concern of Deleuze has been how to understand the question of knowing other than by way of the concept. In opposition to a philosophical tradition dominated by the Idea (Plato), the Cogito (Descartes) or the Faculties (Kant), Deleuze posits a materialist or empiricist philosophy whose legacy lies in Leibniz, Spinoza and Hume. In opposition to the relation between the possible and the real mediated by the concept, Deleuze poses the relation between the virtual and the actual, where the virtual is the complication of a multiplicity of differences implied in any actuality. Thinking, knowing, experiencing is the unfolding and
refolding of what is actual to reveal implied virtualities. In this sense, the substance that is the materiality of things and bodies is so much surface whose topological complication is the fold (le pli). 16 In the discipline of architecture, contemporary practitioners and theorists have adopted some key Deleuzian phrases in discussing emerging design work. Perella is no exception. However, this notion of the fold has been adopted almost entirely at the level of form manipulation. That is to say, Deleuze is mobilised to justify some grounds for new forays into form making. We see this so clearly in Perella’s invocation to topological architecture, with its multiplication of surface differences, its disruption to classical notions of ideal forms. The ‘elite’ design practices referenced in publications such as Hypersurface Architecture or Architects in Cyberspace foreground this valorising of form, in the same moment that the critical register of this work is supposedly neutralising or nullifying form as form-image. 17 That is to say, from the perspective of Lyotard’s relations between the conceivable and the presentable, a certain ‘solace’ is found in the pleasure of form. From a Heideggerian position we can make some similar comments, particularly with regards to that complex ‘middle ground’ that is neither subject nor object, or that ‘third body’ that is neither primitive human body nor electronic informational body. This locale where surface and sign interpenetrate, where image/form/body become an incommensurate event, may be recouped in a less delirious discourse in terms of the Gebild of Heidegger’s “The Age of the World Picture”: the image-system.

Heidegger stresses that by ‘picture’ he does not simply mean image or representation. He suggests that the colloquial expression “we get the picture” is closer to what he means:

“To get into the picture” [literally, to put oneself into the picture] with respect to something means to set up whatever is, itself, in place before oneself just in the way that it stands with it, and to have it fixedly before oneself as set up in this way.

… “We get the picture” concerning something does not mean only that what is, is set before us, is represented to us, in general, but that what is stands before us—in all that belongs to it and all that stands together in it—as a system. 18

This would necessarily have to be read in conjunction with Heidegger’s writing on the essence of technology, on the Gestell, or the enframing of standing reserve: “the challenging claim that gathers man to order the self-revealing as standing reserve.” 19 The claims being made by Perella, in the invocation of radical philosophy, are too swift for the degree of resolution he brings to his thinking concerning those philosophical works. More to the point, the nexus he establishes between the topological architectures of a design avant-garde and the pixel architectures of a multimedia consumer industry is quite easily recouped within much more conventional frameworks. Three such frameworks would be, firstly, representational schemas (contemporary architecture expressive of contemporary schizoculture); secondly, formal analysis (one looks for the diagram of the fold as one would once have looked for the Beaux-Arts plan); and, thirdly, anthropomorphism (little is disturbed in the mechanism of mimesis at work here, only the form within which one thinks the body). At stake here is what Perella might refer to as the misreading of intensity for extension. As event, hypersurfaces are localised and singular sites of intensity, of body coupling with thing, rather than sites that are calculable in the Cartesian sense of locating things in space. But Perella does not say enough here about intensive spaces, even whether or not they can be prefigured in design process. 20

The work of Marcos Novak occupies a similar terrain, though it has a different moment of implication. Where Perella has focused on transformations of conventional architectural objects, and on the subject positions and temporalities accorded to topological approaches, Novak, with his notion of liquid architectures, extends the terrain of applicability of architectural practices. Architecture is now information space. Cyberspace is architecture; it has architecture; it contains architecture. Cyberspace places humans within information space and hence is an architectural project. Novak stresses its poetic dimensions: “the greater task will not be to impose science on poetry, but to restore poetry to science.” 21

The philosopher Elizabeth Grosz has written directly on the architectural projects of proponents like Perella and Novak. In “Cyberspace, Virtuality, and the Real: Some Architectural Reflections,” she poses a series of dilemmas. 22 As the author of Volatile Bodies, Grosz has a particular focus on those practices that continue to maintain a fundamental dualism concerning mind-body relations. 23 And for all their protestation to the contrary, she sees proponents of virtual reality and cyberspace as ultimately maintaining this division: “the real is not so much divested of its status as reality as converted into a different order in which mind/will/desire are the ruling terms: a real in which there is a ruling term and whose matter, whose “real,” is stripped away.” She takes Novak to task for his statement: “cyberspace stands to thought as flight stands to crawling.” This implies cyberspace as a mode of transcendence, comparing real space of immanence (bodily action) to its transcendence, with all of the hierarchical privilege accorded mind in Western
thought. 24

In this respect, it is useful to look at the published work of Greg Lynn, a contemporary designer who has carefully articulated the critical epistemological breaks his work is making with respect to orthodox notions of design. 25 Unlike Perella or Novak, Lynn focuses on computer visualisations as a design tool that affords a radical break with conventional notions of design practice. Lynn himself begins to articulate his practice by posing three possible models by which we approach the making of architecture. The first he delineates in terms of exact measure that produces geometry reducible to a closed and fixed order. It allows for the repetition of forms and the establishing of fixed identity. He characterises the legacy of Classicism in this model, and all design practices that recognise architecture as static and unchanging form. The second model he refers to as anexact. In this mode of design practice, forms are open and evolve their order within
a field of gradient influences. These gradient influences are variable and lead to practices of deformation. Due to a capacity to measure the range of forces that lead to form manipulations, anexact practices are rigorous yet irreducible to the repetition of identity. They are, rather, singularities. The third model he terms inexact. It is characterised by a formlessness brought about by diffused fields without contour or structure. This leads to nondescript identity. If the first model is recognised by a legacy of Classicism, the third would be characterised by something like the formless in the work of Georges Bataille, the amorphousness of ecstatic being.

Lynn is adamant that architecture needs geometry, which is to say a mode of embedded structuring. He is also adamant that architecture needs to be responsive to complex gradient forces in the realisation of its formation, and hence cannot continue to pursue a fascination with static and closed forms as the determinants of composition.

In this respect, Lynn has developed his notion of blobs. 26 Blobs are not amorphous body types without a ‘descript’ form or geometry preceding their building. That is, they do not lean toward a Bataillian excess, but are rather proportionally closed bodies that are organisationally open. This is fundamentally a move from the repetition of the identical to the differential, from repeating identities to unfolding processes of becoming. It is what Lynn calls, after Deleuze and Foucault, a diagram of the body that is indeterminate without being simply amorphous. 27 As diagram, it is the presentation of the mutable forces that act on assemblage, fusion, mutation, evolution and fluidity. If the technology of perspective opened the horizon for the construction of exact architectures, Lynn has it that the computer opens the horizon for anexact architectures. The computational processes required to work with the mutability of gradient fields in the formation/deformation of building are entirely beyond the capacity of even the most mathematically inclined architect. The computer and its visualisation processes enable the designer to work with immediacy and transparency at the level of anexact design. As Lynn puts it, computer-aided design introduces topology wherein surfaces and lines are topological entities based on splines and curvilinear lines, allowing for surfaces capable of deformation by a variety of forces. One is literally designing by the influence of shaping forces.
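
By way of illustration only, the following sketch gives a minimal, assumed sense of what “designing by the influence of shaping forces” can mean computationally. It is not Lynn’s software nor any particular CAD package’s method; the names, the attractor model and the falloff are hypothetical. Control points of a curve are nudged by a gradient field of attractors, and the “still-standing form” is simply whatever the curve describes when the deformation is stopped.

```python
# Minimal sketch (assumed, illustrative): a curve's control points are displaced by
# a field of point "forces", and the resulting curve is evaluated as the momentary form.

from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Attractor:
    """A point force whose influence decays with distance (an assumed gradient field)."""
    position: Point
    strength: float  # positive pulls control points toward it, negative repels

def displace(points: List[Point], forces: List[Attractor], step: float = 0.1) -> List[Point]:
    """Move each control point one small step under the summed influence of all attractors."""
    out = []
    for (x, y) in points:
        dx = dy = 0.0
        for f in forces:
            fx, fy = f.position
            dist2 = (fx - x) ** 2 + (fy - y) ** 2 + 1e-9
            # influence falls off with squared distance
            dx += f.strength * (fx - x) / dist2
            dy += f.strength * (fy - y) / dist2
        out.append((x + step * dx, y + step * dy))
    return out

def bezier(points: List[Point], t: float) -> Point:
    """Evaluate the curve through de Casteljau's algorithm: the 'still-standing' profile."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

if __name__ == "__main__":
    control = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]  # an initially flat profile
    field = [Attractor((1.5, 2.0), 1.0)]                        # one shaping force above it
    for _ in range(20):                                          # let the form evolve in time
        control = displace(control, field)
    print([bezier(control, i / 10) for i in range(11)])
```

The point of the sketch is only that the designer specifies forces and lets a form evolve under them, rather than specifying a closed form directly.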

Three key aspects of this are flexible surfaces, measurable constraints of parameters in performance envelopes that enclose but do not exactly define limits, and crucially, evolution in time. This third aspect, time, is emphasised by Lynn. Animate forms are forms in motion, implying evolution of a form with its shaping forces. Lynn traces a genealogy of the emergence of temporality of animate form through the work of Etienne-Jules Marey, Rene Thom and Francis Galton, and the work of Hans Jenny in the 1950s and 60s. Lynn develops two mutable paradigm models, that of skeletons or inverse kinematics, and that of blobs or isomorphic polysurfaces.
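
Lynn’s “isomorphic polysurfaces” are, in computational terms, close to what animation software calls metaballs or blobby surfaces: each primitive contributes a field of influence that decays with distance, the fields are summed, and the blob’s surface is an iso-contour of the combined field. The sketch below is an illustrative, assumed rendering of that idea, with hypothetical names and falloff kernel, not a description of Lynn’s actual tools.

```python
# Illustrative sketch (assumed): a blob as the iso-contour of summed influence fields.
# Two primitives fuse into one contiguous body when their fields overlap above the
# threshold, and read as discrete bodies otherwise.

import math
from typing import List, Tuple

Center = Tuple[float, float, float]  # (x, y, influence radius)

def field(x: float, y: float, centers: List[Center]) -> float:
    """Summed influence of every blob primitive at point (x, y)."""
    total = 0.0
    for cx, cy, radius in centers:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += math.exp(-d2 / (radius ** 2))  # smooth falloff (one possible kernel)
    return total

def inside(x: float, y: float, centers: List[Center], threshold: float = 0.5) -> bool:
    """A point belongs to the blob when the combined field exceeds the iso-threshold."""
    return field(x, y, centers) >= threshold

if __name__ == "__main__":
    near = [(0.0, 0.0, 1.0), (1.2, 0.0, 1.0)]  # close primitives: fields overlap and fuse
    far = [(0.0, 0.0, 1.0), (4.0, 0.0, 1.0)]   # distant primitives: two discrete bodies
    print(inside(0.6, 0.0, near))               # True: one continuous surface
    print(inside(2.0, 0.0, far))                 # False: the bodies remain separate
```

Read this way, a blob is proportionally closed but organisationally open: whether two primitives register as one contiguous body or as separate bodies depends on the relations of influence between them, not on a fixed contour given in advance.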

The form and organisation of any given configuration is intensive as it depends on subtle variations of exigent conditions, internal limits and adjacencies of relations. Neither skeleton nor blob can be reduced to topological essence, or ideal form. The animate form is both a singularity and a multiplicity. From the perspective of unified surface, it is a singularity as it is contiguous but not reducible to a single order or essence. From the perspective of constituted components, it is a multiplicity, composed of disparate components put into a complex relation. Hence, assemblages are singular and continuous while multiple and discontinuous, leading to an organic wildness and monstrosity. Exact and inexact are two ideal categories, that of plenitude of order and plenitude of difference. The anexact, as singularity, is not reducible to an ideal order, and hence multiplies difference, yet remains measurable. 28

While the work of Lynn genuinely opens a less delirious space of the virtual, and one that seems to offer significant interventions in design processes, there are two critical registers that need to be brought to this work. The first concerns the manner whereby Lynn references Foucault and Deleuze, particularly in the opening essay of his book, Animate Form. One would not want to say that Lynn misses the point being made by Deleuze or Foucault, regarding the notion of the diagram; he most certainly does understand its significance. Rather, Lynn lets it slip by. What do I mean? We necessarily need to turn to how Deleuze uses the notion of “diagram.” When Deleuze is writing on Michel Foucault’s “panopticism,” he references this as a “diagram of power.” 29 For Foucault, panopticism, as an analytics of a certain reading of modernity, is derived from an architectural figure, Jeremy Bentham’s Panopticon, the design of a prison. 30 Many commentators have been critical of Foucault for using such a limited figure in describing such an
abstract and diffuse notion as the modern forms that power takes in productive and coercive mechanisms of control. A pure Panopticon was never built, and modernity’s spaces are infinitely more variable. Deleuze, in discussing the notion of a diagram of power, points out a serious misreading of Foucault here. Foucault speaks of our forms of knowing being produced by relations of power, or force against force. “Form” here has two meanings—the organisation of matter into visibilities and the finalisation of functions into statements. Knowledge is composed of these two heterogeneous forms, statements and visibilities. Relations of power, however, work with unformed, unorganised matter and unformed, unfinalised functions. It is this informal dimension that Deleuze and Foucault designate by ‘diagram.’ The concrete assemblages of the strata of knowing (statements and visibilities) are effects that realise these virtual relations, precisely because relations between forces, relations of power, are virtual, potential and unstable. The diagram of relations
between forces is a non-unifying immanent cause coextensive with the whole social field:

It is precisely because the immanent cause, in both its matter and its function, disregards form that it is realised on the basis of a central differentiation, which, on the one hand, will form visible matter, on the other will formalise articulable functions. 31

When Deleuze discusses the fold as that which mediates between virtuality and actuality, he uses the term as a diagram, in the sense mentioned above. It is not the organisation of matter into some visible form, nor the finalisation of matter into function. Rather it is the virtual relations of force that destabilise the determinable and the articulable into the new. And, just as commentators of Foucault began to look for little Panopticon prison designs in every disciplinary space of modernity, so designers who have read a little Deleuze start making buildings with little folding surfaces. Such practices fail to recognise the distinct ontologies of form and force that are crucial for Deleuzian (and Foucauldian) analyses. Lynn does not read it quite like this. He uses an example found in Deleuze’s book on Foucault, that of the sequence of letters Q,W,E,R,T,Y found on a keyboard. Deleuze, after Foucault, refers to this as a statement, a certain form of
knowing. This example is doubly complicated as it is non-discursive. In The Archaeology of Knowledge, Foucault defines the statement as, literally, the “it is said,” and draws a distinction between the discursive and non-discursive. 32 This distinction will later be read by Deleuze as the heterogeneity between statements and visibilities, each of which has its distinctive forms of knowing and forces of composition. While Lynn correctly differentiates statements from linguistic constructions such as propositions, he clearly implies that statements operate at the level of the virtual: “the distribution of its letters on keys in space is a virtual diagram, or an abstract machine.” This simply confuses what are, for Deleuze, different ontological levels. The distribution of letters on the keyboard is not a relation of force as immanent cause. The implication here is that while Lynn is designing processually with forces, they are simply not what Deleuze or Foucault meant by force. Foucault engages these issues in Discipline and Punish, and he is talking about relations of power that are productive of subjects and our forms of knowing. Lynn translates the entire scene to a mathematisation of nature at the level of form animation. And in his discussion on Foucault, Deleuze references not two but three ontologies, those of strata of knowing (visibilities and statements), those of diagrams of power (virtualities) and those of the self. 33 This is not the self as a form of knowing produced by relations of power, but the self Foucault engages with in The Care of the Self, under the notion of an aesthetics of the self, where the self is a political substance that Foucault named resistance. It is force becoming self-action, an auto-affection that thinks otherwise, or thinks the unthought, what has not been thought. As we can see, in the truncation of his reading, or in its selectivity, Lynn folds too soon, to use an expression from Deleuze, and limits his engagement with the virtual to force as measure.

It is instructive to take note of some of the comments on cyberspace and virtual reality made by the cultural theorist and psychoanalyst Slavoj Zizek. 34 In what is a complex reading, quite dependent on the work of Jacques Lacan, Zizek locates two points crucial for developing critiques of practitioners such as Perella and Novak on the one hand, and the work of Lynn on the other. Perella and Novak focus on a bodily immersion constitutive of hyperspatiality, which alerts us to the dominant discourses on cyber-experience, associated with opening new dimensions to identity. Lynn focuses on the software as a tool that allows for the invention of new problems. The dynamics are working quite differently with each. Zizek is working with the Lacanian distinction of the symbolic, imaginary and real. Reality is constituted through symbolic (immersion in language) and imaginary (immersion in specular identification) relations. The real is the site of disgust/pleasure that interrupts symbolic-imaginary construction. It is the pathological constitutive of the normal. 35 Zizek stresses that the symbolic, as that which allows mastery of reality, is already virtual, while the imaginary is constitutive of a real self as image.

Virtual reality or cyberspace does not pose a threat to reality; in fact it constitutes a surfeit or plenitude of reality; rather it severs relations to the real, which is to say, to the jouissance or pleasure/disgust of decentered selves. The multiplicity of identities discussed with respect to cyberspace plays out an undecidability in oscillations between imaginary and symbolic identifications. But this is precisely not the site of pleasure and disgust constitutive of the play of identity. In fact, the locus of the split constituting the horizon of identity play is rather the irreconcilability of the empty band that is the subject and the identities it may assume. The locus of the empty band is the real. And it is this that is always in deficit with the language/image saturations of cyberspace. Hence, any radicality positioned by Perella or Novak for the liquidity of spatialised selves ultimately does not reside in screen or virtual spaces.

If body-immersion truncates the real, Lynn points to another phenomenon, that of identification with the program as ego. For Lynn, the computer-aided design software is an agent that acts as a stand-in, literally doing what we cannot do, performing a series of specific functions. Such a program that acts as my stand-in provides an illustration of the Lacanian concept of ego, as opposed to the subject: a cyberspace subject is not another subject but simply the subject’s ego, a supplement as a kind of alter-ego. 36 Hence the subject has a relation to it of acceptance through disavowal: one knows it is a program, not a living person, but for that reason one can treat it as a caretaker/partner. Hence our radical decentrement, as these agents mediatise us.

Returning to the Deleuzian problematic of the virtual and the actual, it is necessary to pinpoint how much of architectural discourse and practice misconstrues its engagement here, so as to conventionalise what might otherwise be a radical approach to a new ontology of spatiality. It is instructive to gauge the comments by a cultural philosopher who has addressed this Deleuzian material in the context of architectural practice and digital technologies. I am thinking here in particular of some of the writings of Brian Massumi. 37 One might also consider the major contributions by John Rajchman and the design practitioner, Bernard Cache, already cited, as well as the work of Elizabeth Grosz. In “Sensing the Virtual, Building the Insensible,” Massumi moves more cautiously than does Perella in discussing the notion of topology in architecture, and the relation of computer-aided design to a Deleuzian notion of the virtual. Architectural practices have always necessarily negotiated moving from the abstractness of prefiguration in design procedures to the concrete real of still-standing forms. However, design practices have tended to work with an abstraction defined by the concept of the calculable. Massumi’s example here is Le Corbusier, whom he quotes: “To conceive, it is first necessary to know what one wishes to do and specify the proposed goal.” 38 While Deleuze’s ‘virtual’ is abstract, it is not the abstraction of the possible by way of the concept. Rather it is the potential for the new in what is actual; hence actuality is understood as a becoming otherwise. Topology is a response in architectural practice for negotiating how one moves from virtuality to actuality. To quote Massumi:

Topology deals with continuity of transformation. It engulfs forms in their own variation. The variation is bounded by static forms that stand as its beginning and its end, and it can be stopped at any point to yield other still-standing forms. But it is what happens in-between that is the special province of topology. The variation of seamlessly interlinking forms takes precedence over their separation. … When the focus shifts to continuity of variation, still-standing form appears as residue of a process of change, from which it stands out (in its stoppage). … The variation, as enveloped past and future in ceasing form, is the virtuality of that form’s appearance (and of others with which it is deformationally linked.) 39

Crucial for Massumi is the difference between abstraction as prefiguration of what is already in the mind’s eye, or the assembly of novel combinations from pre-existing forms, and abstraction as an active engagement with an indeterminacy, or incalculability, via what he terms “virtual forces of deformation.” The computer is not used as a device to image what is to be built but is rather a tool to “catalyse newness and emergence.” 40 The key notion here is ‘force.’ It is not the imageability of forms of deformation that is at stake, but the activating of forces of deformation. It is for this reason that Massumi is critical of those working in digital technologies and architecture who define this imbrication at the level of the window. That is to say, designers have tended to recognise relations between digital technologies and architecture in terms of designed hypersurfaces as events embedding the multimedia dexterity available on the computer screen.

It is crucial not to confuse the complexity of multimedia technologies with virtualisation. The virtual is not the content or even the ‘infosphere.’ Nor is it the technological connectivity itself. What is it, then? Massumi gives two models, one he terms “windowing,” the other “tunnelling”:

Windowing provides a framed and tamed static perspective from one local space onto another that remains structurally distinct from it. The connection established is predominantly visual, or audio-visual. Features from, or of, one locale are ‘delivered’ into another as information, prepackaged for local understanding and use. Windowing is communicational. 41

We may understand this model as the predominant one of Internet technologies and the architectures of virtual spacing, where ‘virtuality’ resides at the level of the determined forms and determining functions of content. The imbrication of pixel architectures and topological surfaces suggested by Perella has tended to be construed in one form or another according to this model. Tunnelling, on the other hand, activates virtuality at the level of the Deleuzian diagram, and is concerned not with the communicability of good form from one locale to another as data pre-packaging, but with perception itself, “presenting perceptions originating at a distance.”

The perceptual cut-ins irrupt locally, producing a fusional tension between the close at hand and the far removed. As the distance cuts in, the local folds out. This two-way dynamic produces interference, which tends to express itself synaesthetically, as the body returns vision and hearing to tactility in an attempt to
register and respond to a structural indeterminacy. 42

Crucial for Massumi, this process is not concerned with producing the new, as in a new thing or an invention. Rather, this opening is onto newness: “the reality of transition, the being of the new, quite apart from anything new.” 43 And it is in this sense that digital technologies may be conceived of as virtual, in the sense of being activating forces for the emergence of unformed matter and unfinalised functions that are constitutive of the event of the new. Hence, the stakes for the discipline of architecture are not in the presentation of new forms responding to the technological innovation or imperatives of digital technologies. Rather, they are in the activating of the virtualities in what is already actual, the horizons of which are revealed in the capacities of digital technologies to confound the near and the far as the non-local. As Massumi suggests:

When the communicational medium ceases to be transparent and perforce stands out in its materiality, information blends into perception. Information then precedes its understanding: it is experienced as a dimension of the confound before being understood and used and perhaps lending itself to invention. 44

If we ask how to conceive of the relation between digitisation and knowledge, it would be a mistake to figure digitisation as one of the many forms that knowledge may take, as if it were an instrument that gives shape. Rather, digitisation is a technology of power and necessarily needs to be conceived of ontogenetically in terms of a diagram in Deleuze’s or Foucault’s sense, as something akin to panopticism or the fold, the activating of a force at the level of unformed matter and non-finalised functions, producing formed matter as visibilities and articulable functions as statements. In a similar way we may begin to recognise architecture, abstractly, as a diagram of power, or technology of power, and it is at this level, of regimes of power productive of our forms of knowing, that the imbrication of digitisation and architecture may be recognised.
The question would then be: how does one now recognise panopticism, as the spatialising of disciplined bodies, in the light of regimes of truth productive of the non-local and unstratified spacings of digital technologies? 45

Without reducing this question entirely to a Heideggerian formulation, certainly the question of technologies of power and diagrams of power circulates around the issue of enframing, exposed by Heidegger in “The Question Concerning Technology.” Equally, the confound of the “non-local” and the being of the new, as discussed by Massumi, alerts us to a broader problematic of the incalculable we find alluded to by Heidegger and developed more fully in Lyotard’s work on the sublime.

Notes

1. Martin Heidegger (1977) “The Age of the World Picture” in The Question Concerning Technology and Other Essays, trans. William Lovitt, Harper Torchbooks, New York, p. 134.
2. Ibid., pp. 135-136.
3. See Jean-Francois Lyotard (1994) Lessons on the Analytic of the Sublime, trans. Elizabeth Rottenberg, Stanford University Press, Stanford, Calif.
4. Immanuel Kant (1986) The Critique of Judgement, trans. James Creed Meredith, The Clarendon Press, Oxford, especially pp. 90-203.
5. Jean-Francois Lyotard (1992) The Postmodern Explained, trans. Julian Pefanis and Morgan Thomas, University of Minnesota Press, Minneapolis.
6. See Denis Hollier (1989) Against Architecture: The Writings of Georges Bataille, trans. Betsy Wing, The MIT Press, Cambridge, Mass.
7. For some contemporary critiques of the instrumentalism inherent in computer-aided design practices, see, for example, Alberto Perez-Gomez (1989) Architecture and the Crisis of Modern Science, The MIT Press, Cambridge, Mass.
8. See Stephen Perella (1998) “Hypersurface Theory: Architecture >< Culture” in Hypersurface Architecture, ed. Maggie Toy, Academy Editions, London, p. 10.
9. Ibid.
10. It is not without coincidence that Perella teaches in the architecture school headed by Bernard Tschumi. It is Tschumi’s Architecture and Disjunction that, perhaps, pushes the furthest this dislocation of the modernist cornerstone of form and function. One may also see this in the design work of Peter Eisenman, as well as the range of practitioners referenced in Hypersurface Architecture. See Bernard Tschumi (1994) Architecture and Disjunction, The MIT Press, Cambridge, Mass.
11. Perella (1998) “Hypersurface Theory: Architecture >< Culture” p. 10.
12. Perella references here the work of Bernard Cache and his theorisation of ‘subjectiles’ and ‘projectiles.’ See Bernard Cache (1995) Earth Moves, trans. Anne Boyman, The MIT Press, Cambridge, Mass.
13. Perella (1998) “Hypersurface Theory: Architecture >< Culture” p. 10.
14. Ibid.
15. Gilles Deleuze and Felix Guattari (1977) Anti-Oedipus: Capitalism and Schizophrenia, Viking, New York; Gilles Deleuze (1993) The Fold: Leibniz and the Baroque, trans. Tom Conley, University of Minnesota Press, Minneapolis.
16. For a discussion of some applications of Deleuze’s work in architecture, see John Rajchman (1998) Constructions, The MIT Press, Cambridge, Mass.
17. See Architects in Cyberspace II (1998), Academy Editions, London.
18. Martin Heidegger (1977) “The Age of the World Picture” p. 129.
19. Martin Heidegger (1977) “The Question Concerning Technology” in The Question Concerning Technology and Other Essays, trans. William Lovitt, Harper Torchbooks, New York, p. 19.
20. More pertinent to a thinking of hypersurface as intensive site would be the early work of Lyotard on libidinal economics, which attempts to conceive of both the human body and the body of Capital as bands or surfaces of coupling. See Jean-Francois Lyotard (1993) Libidinal Economy, trans. Iain Hamilton Grant, Indiana University Press, Bloomington. This work is aligned with, though not coincident with, the writings of Deleuze and Guattari. In a review of Lyotard’s work, Alphonso Lingis notes: “Where does the erotogenic zone start and where does it stop? Where do organisms start and where do they stop? For ultimately it is the same processes that take pleasure in constituting systems and organic totalities that are at work in thought, and in the organisation of the body politic—at least that which, like that of the young Marx, depend on the idea of society as an organic totality. And the intensities of the primary processes are excitations at the conjuncture of one’s own surfaces with one another, of one’s surfaces with those of another, of one’s surfaces with those of the physical world. There is a libidinous economy at work in the very circulation of goods and services which constitute the political economy of capitalism.” See Alphonso Lingis (1980) “Interpretation of the Libido,” Sub-Stance, vol. VIII, no. 4, p. 94.
21. See Marcos Novak (1992) “Liquid Architectures in Cyberspace,” in Cyberspace: First Steps. Ed. Michael Benedikt, The MIT Press, Cambridge, Mass, pp. 225-254.
22. Elizabeth Grosz (1997) “Cyberspace, Virtuality, and the Real: Some Architectural Reflections,” in Anybody, ed. Cynthia Davidson, The MIT Press, Cambridge, Mass, pp. 109-117. See also the more recent publication by Grosz (2001) Architecture from the Outside: Essays on Virtual and Real Space, The MIT Press, Cambridge, Mass.
23. Elizabeth Grosz (1994) Volatile Bodies: Toward a Corporeal Feminism, Indiana University Press, Bloomington.
24. Grosz, “Cyberspace, Virtuality, and the Real,” p. 114.
25. The major publication to consider is Greg Lynn (1999) Animate Form, Princeton Architectural Press, Princeton, N.J., particularly the introductory essay by Lynn, pp. 9-43. See also Lynn (1997) “From Body to Blob,” in Anybody, ed. Cynthia Davidson, The MIT Press, Cambridge, Mass, pp. 162-173; and Anthony Vidler (2000) “Skin and Bones: Folded Forms from Leibniz to Lynn,” in idem, Warped Space: Art, Architecture and Anxiety in Modern Culture, The MIT Press, Cambridge, Mass.
26. Much of the graphic material published in Lynn’s Animate Form may be found on his web site: www.basilisk.com/aspace/form.html
27. Lynn, Animate Form. The Deleuzian reading of the diagram is implicit throughout the text, but made most explicit p. 39 ff.
28. See Lynn, “From Body to Blob,” p. 169.
29. See Gilles Deleuze (1986) Foucault, trans. Sean Hand, University of Minnesota Press, Minneapolis.
30. See, for example, Michel Foucault (1978) “The Eye of Power,” trans. Mark Seem, in Semiotext(e) vol. 3, no. 2, pp. 6-19.
31. Gilles Deleuze (1986) Foucault, p. 38.
32. Michel Foucault (1972) The Archaeology of Knowledge. Trans. A.M. Sheridan Smith, Harper Torchbooks, New York. See especially “Defining the Statement,” pp. 79-88.
33. Gilles Deleuze (1986) Foucault, p. 119.
34. Slavoj Zizek (1997) “Cyberspace, Or, The Unbearable Closure of Being,” in idem, The Plague of Fantasies. Verso, London, pp. 127-167.
35. Slavoj Zizek (1997) “Cyberspace, Or, The Unbearable Closure of Being,” especially pp. 137-140.
36. Slavoj Zizek (1997) “Cyberspace, Or, The Unbearable Closure of Being,” especially pp. 140-143.
37. See Brian Massumi (1998) “Sensing the Virtual, Building the Insensible” in Hypersurface Architecture, ed. Maggie Toy, Academy Editions, London, p. 10; and Elizabeth Grosz (1998) “The Future of Space: Towards an Architecture of Invention,” in Anyhow ed. Cynthia C. Davidson, The MIT Press, Cambridge, Mass.
38. Le Corbusier and Amedee Ozenfant (1920) “Purism” in Modern Artists on Art, ed. R.L. Herbert, Prentice-Hall, Englewood Cliffs, N.J., p. 62. Quoted in Brian Massumi (1998) “Sensing the Virtual, Building the Insensible” p. 16.
39. Brian Massumi (1998) “Sensing the Virtual, Building the Insensible” p. 16.
40. Ibid., p. 17.
41. Brian Massumi (1998) “Sensing the Virtual, Building the Insensible” p. 23.
42. Ibid.
43. Ibid.
44. Ibid., p. 24.
45. It is, perhaps, the work of Giorgio Agamben that most develops this double issue of panopticism and the non-local. See, for example, Giorgio Agamben (1998) Homo Sacer: Sovereign Power and Bare Life, trans. Daniel Heller-Roazen, Stanford University Press, Stanford, Calif; and Mark Jackson, “The Architecture of Exceptional Places,” Proceedings of the conference Habitus 2000: A Sense of Place, Curtin University of Technology, Perth.