Abstract
This paper aims to highlight the life of computer technologies in order to understand what kind of ‘technological intentionality’ is present in computers, based upon the phenomenological elements constituting objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by digital technologies by focusing on the objects’ inner ‘life,’ thanks to the analysis of passive synthesis and of the phenomenological horizons of objects. These elements can be applied to computer technologies to show how digital objects are ‘alive.’ This paper focuses on programs developed through high-level languages like C++ and unsupervised learning techniques like the ‘Generative Adversarial Model.’ The phenomenological analysis reveals the computer’s autonomy within the programming stages. At the same time, the conceptual inquiry into the digital system’s learning ability shows the alive and changeable nature of the technological object itself.
1 Introduction
This paper aims to highlight the life of computer technologies in order to understand what kind of ‘technological intentionality’ is present in computers, based upon the phenomenological elements constituting objects in general.
The notion of technological intentionality has already been introduced by several influential schools of thought in the contemporary philosophy of technology. In Information and Computer Ethics, the idea of technological intentionality stands at the center of the debates about computer systems’ agency, and it is tightly related to the idea of ‘computer intentionality,’ where this concept stands for the computer system’s capacity to behave in a certain way in response to input in order to reach a specific goal (Berkich, 2017, 2018; Johnson, 2006; Johnson & Miller, 2008; Miller et al., 2017). This notion can also be found in a slightly transformed shape in the Latourian ‘technological detour,’ where it represents a unique mode of technical activity through which the user’s plan of action is modified by the object (Conty, 2013; Latour & Venn, 2002; Wellner, 2020a, 2020b). Postphenomenology, too, clearly uses this term as a critical component of ‘human-technology’ relations within a phenomenological framework (Ihde, 1990; Verbeek, 2005, 2008a, 2015).Footnote 1 However, postphenomenology does not focus on the phenomenological details underlying this concept, since only a few authors analyze the Husserlian phenomenological texts where the ideas of ‘things themselves’ and ‘intentionality’ were introduced (Husserl, 1966; Liberati, 2016b; Mykhailov, 2020; Overgaard, 2004; Steinbock, 1997; Willis, 2001). Thus, even if many approaches tackle the activity of objects in terms of ‘technological intentionality,’ many elements of the phenomenological analysis that directly structure what technological intentionality is remain absent from the current discussion.
By focusing on these phenomenological elements, it is possible to make visible the inner life of objects and to show how technologies have their own ‘intentionality.’ For this reason, the phenomenological perspective is important: it can change our perception of what computers are and of their effects on our surroundings by turning computers from mere dead entities into almost alive things endowed with intentionality.
This paper is structured into two main sections. In the first section, we highlight the different features of technological intentionality by analyzing the Husserlian texts on objects. In the second section, we apply the key findings of the first section to computer technologies in order to show the implications of this perspective for the analysis of the actual use of digital devices. More specifically, we analyze the programming process with C++ and provide a phenomenological analysis of the Generative Adversarial Model in unsupervised learning.
2 Addressing Technological Intentionality Within Phenomenology
The notion of ‘intentionality’ is a complex concept in phenomenology, which touches on many different elements and traditions. Husserl directly relates his use of “intentionality” to the work of Franz Brentano. Brentano introduced the notion of intentionality at the end of the nineteenth century (Brentano, 1973), and this notion is widely used today in contemporary philosophical debates in epistemology (Schmid, 2012), philosophy of mindFootnote 2 (Chalmers, 1996; Crane, 2010; Dennett, 1987; Searle, 1983), philosophy of language (Kaplan, 1978; Loewer, 1987; Lohmar, 2012), metaphysics (Zalta, 1991), ethics (Waldenfels, 2012), and cognitive sciences (Dretske, 1995; Fodor, 2003). For Brentano, intentionality is a significant feature of various mental states (hopes, beliefs, judgments), which grounds the structure of the mind itself (Jacob, 2019). At the same time, intentionality, as Brentano famously notes in the Appendix to the 1911 edition of his Psychology from an Empirical Standpoint, reflects the relational nature of mental activities (1973). As a result, intentionality appears to be an invariant ideal component of experience in general.Footnote 3
For this reason, it was introduced into phenomenology by Husserl to describe our experience as a relation binding subjects and objects. The subject is directed towards the external object, and intentionality can be thought of as the arrow connecting the two entities. Even if there are many different accounts of what intentionality is within phenomenology, we apply this relational interpretation of intentionality throughout the paper to better analyze how technologies relate to us.
2.1 Technological Intentionality as Generated from the Objects: Passivity and Directedness
The idea that every technological object acts by itself in order to relate to other objects and to human subjects is not new in phenomenology. Its roots can be traced to Husserlian texts like Analysen zur passiven Synthesis (1966) and Erfahrung und Urteil (1939), even if they analyze objects in general and not technologies directly. Husserl suggests that we do not just relate to objects in an active way, as when we look at the objects in front of us; the objects passively relate to us too.Footnote 4 By introducing the concept of passivity within the object, Husserl shows that the object acts by constituting itself. According to phenomenology, this process of constitution is performed within the object without any active role played by the subject (Biceaga, 2010; Ferrarin, 2006). In order to better explain this process, Husserl talks about an ‘activity’ within the passivity of the object. He thus suggests that the object is always more than a mere dead entity waiting for our actions, because it is active. Moreover, he highlights that the subject has no ‘power’ over these actions performed within the object, since they are done autonomously by the object. By constituting itself, the object directly shapes human behaviors, goals, and preferences as if it were an active entity (de Preester, 2011; de Preester & Tsakiris, 2009; Gallagher, 1995; Liberati, 2017a, 2020a, 2020b). For example, a red apple on a table constitutes itself as a red object and, through its shape and color, ‘tempts’ the subject to take a bite (Liberati, 2020a). The apple is not merely a dead object lying on the table; it is almost alive because it proposes itself to the subject and tempts the subject to perform certain actions.
The fact that objects might ‘act’ towards the human subjects opens a new perspective on the ‘human-technology’ relationship.Footnote 5 Objects are not ‘neutral’ because they are not dead and inert things around the subject. The object is not a passive entity that depends on the subject, but it is an active ‘thing’ that can dynamically participate in the world.Footnote 6
Thus, by simply looking at passive synthesis in phenomenology, it is possible to frame the object’s presence as something ‘alive’ that directly acts on the subject. Consequently, according to this perspective, objects have a specific ‘intentionality’ pointing to the subject.Footnote 7 The result is an arrow opposite to the one we have shown previously: it originates from inside the object and points to the subject, since the object constitutes and proposes itself to the human being.
2.2 Technological Intentionality as an Openness
The other important element we can find within the Husserlian phenomenology in relation to the object’s constitution is the presence of the object’s horizons.
According to phenomenology, the object has three different horizons: the world, outer, and inner horizons.
The world horizon relates to the fact that the experience of the object is part of the subject’s experience in its totality. It refers to the fact that the experience of the object is not isolated, but part of our way of experiencing the world in general (Gauttier & Husserl, 1950; Liberati, 2020a, 2020b). For the purposes of this paper, we focus on the other two horizons, which relate more specifically to aspects of the object itself: the outer and inner horizons.
The outer horizon links the object to what is around it by providing the background of the experience. For example, the table on which the apple is placed is part of the outer horizon since it is not part of the object but part of its background (Geniusas, 2012a, 2012b). Obviously, the table is not an element ‘in’ the object; at the same time, it is related to the apple, and it provides aspects which make that apple different from an apple growing on a tree, both in terms of the activities the subject can perform and in the way the apple is perceived. Thus, in order to better understand how the subject perceives the object, phenomenology uses the ‘outer horizon’ to include these aspects of the background as constituting elements of the object. The subject does not make this connection; it is a connection binding the object to what is around it, produced and structured by the object itself. The object acts autonomously here, just as it does in the case of the inner horizon, where it constitutes itself with all the elements which are not manifest, and in the case of passive synthesis in general, where it actively proposes itself to the subject.
By using the outer horizon, it is possible to illustrate that objects are active by adding the relations binding the object to other objects around it. The intentionality of objects is not merely directed towards the subject but also towards other objects, since objects connect themselves to their surroundings, which provide them with details and specific aspects.
Moreover, an object does not have only a background, but it has many inner aspects that the subject cannot grasp all at once. For example, an object like an apple on a table always has another face on its backside, which is not directly accessible by the perceiving subject. The subject must turn the apple or move around it to see the hidden face. Moreover, the apple has elements that need direct actions of the subject to become manifest, like the taste of the apple, which can be perceived by biting the apple and not by merely looking at it. Thus, the apple has hidden aspects like the face on its back and its taste, which are part of what the apple is but not manifest in its perception.Footnote 8 In order to take into consideration these different elements, Husserl proposes the presence of an ‘inner horizon’ which includes all of these hidden inner aspects (Husserl, 1939, 1950, 1976; Jorba, 2019).Footnote 9 According to Husserlian phenomenology, this ‘horizon’ makes the object transcendent because the subject can never perceive all the aspects of an object at once (Wellner, 2020b; Hoffmann, 2019; Liberati, 2016a, 2020a, 2020b). The object is ‘infinite’ because it has more to offer than what is manifest and clear in front of the perceiving subject (Harman, 2008). Thus, thanks to all these elements that are part of the inner horizon, it is possible to think of the object as an infinite resource of different praxes since it constantly shows different elements and changes through time. The subject does not produce this infiniteness, but the object in its constitution generates it.
2.3 Common Elements Between Phenomenology and Postphenomenology
Traces of these phenomenological intuitions related to the horizons can be found within postphenomenology.
The idea of an object in direct relation with the objects around it, as identified by the ‘outer horizon’ in Husserlian phenomenology, can be found in Verbeek’s work. Peter-Paul Verbeek claims that technological artifacts do not function as entities with essential properties until they enter into relations with other technological objects (Verbeek, 2005). This connection to a broader net of other artifacts appears as a necessary condition for ‘stabilizing’ the essential components of the object in question. To operate correctly, every technological artifact must be ‘plugged’ into a wider net of other technological objects, and so every object can only be meaningful within a broader context of other artifacts (Verbeek, 2011, 2020). This broader context creates an ecosystem where each object is directed to the others. Consequently, every technological object exists in a broader horizon of other technical objects, just like the outer horizon highlighted by Husserl (Mykhailov, 2020).
Moreover, the transcendence of the object identified by the Husserlian ‘inner horizon’ can be found directly in Ihde’s idea of multistability. Multistability usually includes two significant intuitions. First, every technological object has an ambiguous nature which enables different kinds of perception and praxes (de Boer, 2021; Ihde, 1978, 2012). The notion of multistability reveals the essential ‘flexibility’ of the technology (Wellner, 2020a, 2020b; Whyte, 2015; Wiltse, 2020). Technological objects are not ‘stable,’ but are given to the subjects in relation to their embedding context (Rosenberger, 2014, 2016, 2017; Verbeek, 2020). A good illustration of this point can be found in personal devices such as computers, smartphones, and tablets. These technological objects can easily be personalized, acquiring a new level of ‘stability’ depending on the specific user’s needs (Tossell et al., 2012). Moreover, personal devices are highly multifunctional technological objects. They play a vital role in different social practices, and they might be used for various purposes such as gaming, reading, and working (Irwin, 2005; Wellner, 2011, 2013, 2018). The multifunctional character of personal devices relies on their capability to run different programs.Footnote 10 Consequently, the essential ‘stability’ of these objects remains open to different users’ preferences and to the social contexts in which these devices are anchored.Footnote 11
Thanks to a phenomenological analysis of the relations binding subjects and objects, we can show the object has a sort of ‘intentionality.’ Firstly, it has activities directed towards the subject as shown by the passive synthesis. Secondly, the object has the power to link autonomously to the other objects around it. Thirdly, the object constantly offers different aspects to the subject thanks to the inner horizon.Footnote 12 Thus, it is possible to introduce the idea of a ‘life’ within technologies manifested by their ‘technological intentionality.’
3 Technological Intentionality in Computer Systems
Even if the actions performed by objects might not be directly visible in ordinary objects such as an apple, or in analog technologies, they become more prominent in the case of computer technologies.Footnote 13 Computers are usually not seen as ‘dead’ objects. For example, they are commonly perceived as scanning artifacts which can track objects and people, share the data with other artifacts, and in so doing be an active part of our everyday environment (Aarts & de Ruyter, 2009; Aydin et al., 2018; Rapp, 2021). They can interact with people, as in the case of social robotics, where robots are designed to have human-like relationships with humans, and the internet of things, where objects are designed to react to the presence of people.Footnote 14 They can add digital objects to their surroundings thanks to augmented reality (Laato et al., 2020; Liberati & Nagataki, 2015; Liberati, 2017a, 2018, 2019; Modena et al., 2021). Moreover, today’s computer systems can perform other complex actions. For example, they can learn from data and make decisions accordingly, and this learning ability of computers stands at the center of today’s AI research. In what follows, we produce a phenomenological analysis of unsupervised learning techniques, with a special focus on the phenomenological application of the outer horizon to developing programs with programming languages like C++Footnote 15 and of the inner horizon to the Generative Adversarial Model.Footnote 16
3.1 Computer’s Intentionality and the ‘Outer Horizon’: Developing Programs with C++
As we have shown previously, one of the main features of technological intentionality is its directedness towards objects. More importantly, we have illustrated that this component of technological intentionality can be highlighted with the phenomenological notion of the ‘outer horizon.’ Technologies are never in a ‘vacuum’; they are always immersed in an environment with other objects, and this element also applies to a programming language like C++ and to the programs developed within it.
The object-directedness of the computer’s intentionality can be highlighted in different cases. The first case relates to the digital environment where the program is created. This digital environment (the so-called ‘integrated development environment,’ or IDE) usually includes many other technological ‘objects,’ such as the programs strictly required for writing executable code. For example, it contains the program necessary to display the results and several mathematical functions to help the programmers (Malik, 2015, p. 10). Moreover, physical objects such as screens, keyboards, mice, and speakers are needed to make programming possible. As Fasoli highlighted, the computer system cannot function properly without these ‘additional’ peripheral devices (Fasoli, 2018). Thus, C++, like other programming languages, needs these objects for developing programs, and the computer system creates connections with them where possible.Footnote 17
At the same time, computers have another way of being ‘directed towards’ their objects (e.g., computer programs). Writing a computer program in a language like C++ usually includes several steps performed by human and non-human agents together.Footnote 18 We can identify four main steps taking place between the programmer and the computer system. The first step of the programming process sheds light on the connection between the designer’s intentionality and the intentionality of the object itself. As suggested by Malik (2015), the first programming phase is the materialization of the designer’s intentionality into source code written in C++. The second stage includes transforming the designer’s code into a language that is understandable to the computer system. Because of the abstract nature of instructions written in high-level languages, the commands cannot be directly transferred into machine code; they need to be translated into the machine’s language to be executed properly (ibid.). This process is accomplished through other computer programs such as a compiler, a linker, and an interpreter. These programs work autonomously from the human designer, and they represent artificial agents which directly participate in the programming process, supporting the desired operation of the system.Footnote 19 The third step manifests the last and final translation into the so-called ‘machine code’ or ‘executable code’ that a computer can use for performing specific tasks. Machine code is a low-level programming language that directly controls the computer’s central processing unit (CPU). All the instructions introduced by the machine code yield a particular set of actions inside the CPU.Footnote 20 The fourth and last stage results in a complete digital object that is perceivable and open to various interactions.
Computer systems can ‘translate’ the original code’s language into a perceivable object, and this translation is performed autonomously.Footnote 21
It is important to highlight that the last three stages (stages #2, #3, and #4) of the programming process operate without any human interaction.Footnote 22 After the source code is written, the computer system has to apply other programs to translate the source code into executable code. Even if these programs have been designed by other human programmers, the system executes them autonomously, and they operate from within the object itself. The programmers can interact only with the first part of the programming process (i.e., writing the source code), while the computer itself executes the three other stages (Malik, 2015).
These autonomous relations, binding the different programs involved in programming with C++ to the final program developed in that language, can be highlighted using the Husserlian notion of the ‘outer’ horizon. In Husserl, the outer horizon enables the object to autonomously link itself to other objects in its surroundings by creating a network among them. In the case of the apple, the apple connects to the table. Such a connection is out of the power of the subject in the sense that it is performed autonomously. In the case of computers, the program written by the subject links autonomously to other running programs (Fig. 1).
Thus, it is possible to think of the program and its connections to other objects in its surroundings in terms of the outer horizon. Such a ‘simple’ change yields significant effects on how we think of the program the human subject is writing. According to the phenomenological perspective, the network connecting the different steps generated while the program is running is not to be perceived as related to the human being who programmed the code. The program acts on its own by autonomously linking to the different objects it needs.
The program developed with C++ is not an inert thing, but through its outer horizon it is active. Thus, we need to introduce this activity as an important constituting part of what the program is and to take into consideration the program as an entity able to produce its intentionality. As a result, the transformation from the code to the digital objects cannot be tackled by just addressing the ‘translation’ from the language of the programmer to the language generating the final object, as many contemporary researchers in the field of digital hermeneutics claim, because the program is an almost ‘alive’ entity which shapes and gives meaning to the process (Possati, 2020; Romele, 2020).
3.2 Computer’s Intentionality and the ‘Inner Horizon’: Generative Adversarial Model in Unsupervised Learning
One of the most significant changes in computing within the last few years is due to a family of techniques named ‘deep learning,’ whose learning methods are generally divided into three groups: reinforcement, supervised, and unsupervised learning (Schmidhuber, 2015). Contemporary machine learning algorithms rely on the computer’s ability to stay constantly open to new actions and to accomplish the learning process without direct human supervision. The system’s appealing feature is its capacity to change its parameters by itself in order to better adapt to the data environment in which it performs (Matthias, 2004). Therefore, the computer system is taken as a genuinely active entity that behaves independently of the designers’ intentions, since it is able to ‘learn’ and ‘adapt’ its parameters to a dynamic physical and digital environment, at least in the terminology used by programmersFootnote 23 (Floridi, 2013; Mykhailov, 2021).
Unsupervised learning provides new ‘learning’ strategies that are less ‘human-dependent’ and give more autonomy to computers. In contrast to the ‘supervised’ and ‘reinforcement’ learning methods, where the designer has either to ‘label’ the data in advance or to provide ‘rewards’ to the computer, ‘unsupervised’ learning techniques do not use well-prepared data to learn and recognize patterns. In unsupervised learning, the system creates its own instances of data (Graves & Clancy, 2019). The Generative Adversarial Model is one among many unsupervised learning techniques in which the system does not just blindly ‘memorize’ the data but creates a model of the underlying class from which the data was originally generated. For example, the system does not just hold a pre-defined picture of a specific apple; it models the whole class of apple pictures. Usually, the Generative Adversarial Model consists of two networks—a generator and a discriminator—which constantly interact with each other during the learning process (Hernández-Orallo, 2020). The generator has to produce realistic images, while the discriminator has to specify whether an image is real or not. Through numerous iterations, both networks learn to generate and to discriminate data more and more efficiently.
Phenomenology clearly shows how these networks provide elements that directly relate to the way the object proposes itself to the subject in novel ways. The object is ‘transcendent’ since its inner horizon cannot be grasped by the subject in its totality all at once; it requires different actions to manifest its different aspects. The ‘Generative Adversarial Model’ admits a large number of different algorithmic architectures, which differ in patterns, neuron composition, and working parameters (Chui et al., 2018). Thus, it can easily highlight the importance of the inner horizon (Arel et al., 2010), since such a digital system is ‘infinite,’ ‘open’ to new transformations, and ‘transcends’ what can be perceived by the human subject in one perception. The network’s inner horizon consists of layers structured in different ensembles depending on the specific purposes of the system in question. Generally speaking, the ‘Generative Adversarial Model’ manifests a limited number of hyperparameters (such as the type of connection, the behavior of neurons, and the number of layers) that create a sophisticated topology of thousands of artificial neurons with millions of parameters altogether (Pasquinelli, 2019). This complexity of the system shows that computers are always more than what the programmer, the designer, and the user merely perceive. The apple on the table has, as its inner horizon, an infinite number of hidden aspects that require multiple perceptions and actions. Similarly, the computer system has hyperparameters, parameters, types of connections, and logical operations requiring multiple perceptions and actions to become manifest. In this way, the computer system embeds novel ways of proposing itself to the world, which makes it rich and always open to new applications.
4 Conclusions
This paper has provided a phenomenological analysis of the notion of technological intentionality in order to better frame the presence of digital technologies in our society by focusing on their inner activities and inner life. Several schools of thought already study technological intentionality, but the notion still lacks a profound phenomenological analysis. Thanks to phenomenology, we can show how objects are ‘alive’ and how they can perform various activities without direct human supervision.
In the first section, we introduced the phenomenological elements relevant to our work. We briefly analyzed the notion of intentionality in its relation between the subject and object. We highlighted the passive synthesis and two horizons as the elements showing the objects’ active role and inner ‘life.’ The object constitutes itself through the passive synthesis, it proposes itself as infinite to the subject thanks to the richness of its inner horizon, and it connects itself to other objects around thanks to its outer horizon.
In the second section, we applied this study to computer systems. The first part of the section was dedicated to programming languages like C++. We showed how it is possible to speak of activities performed by the object itself thanks to the phenomenological notion of the outer horizon. The second part focused on the use of unsupervised learning techniques in the case of neural networks, where, through the notion of the inner horizon, we showed that computer systems are always more active than what human subjects might perceive. Thus, by combining the two parts of the section, it becomes clear that computer systems propose themselves as infinite to human subjects thanks to their inner horizons, and they autonomously connect themselves to other objects, such as other programs, thanks to their outer horizons.
Thanks to this analysis, it is clear that, in order to better understand the role computer systems have in our society, we cannot limit the analysis to the actions performed by human subjects. Such an approach risks missing essential elements because of the autonomous nature of today’s computer systems. We have shown that phenomenology highlights these elements by providing the framework needed to think of computer technologies as ‘alive’ beings. We have also shown that some aspects of the phenomenological approach are already present in postphenomenology.
Objects are not mere ‘dead’ things; they act and shape the world around us. Computers have their own technological intentionality, generated within the object through its inner actions, which directly affects human subjects.
Notes
In particular, the work by Verbeek focuses on how technologies interact with humans by changing the way people live and relate to the world (Verbeek, 2008b).
In relation to this topic it is useful to refer to Daniel Dennett’s theory of ‘as if’ intentionality, which forms the backbone of his ‘heterophenomenology’ (Dennett, 1971, 1987, 1996). According to this theory, “[t]he intentional stance is the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’” (Dennett, 1996, p. 27). For more on the relation between Husserl’s and Dennett’s theories of intentionality, see (Haack, 2016; Meixner, 2006).
Brentano’s position on intentionality highlighted above also aimed to answer the question of how it is possible to think about non-existing objects. If intentionality has a relational nature, how does it relate to things that do not exist? For example, if I am thinking about a unicorn, my thought has an intentional object, even though the object “unicorn” is not a real existing object in the world (Segal, 2007). In this way, with the notion of intentionality, Brentano tried to distinguish mental from physical phenomena. Husserl later provided his own answer to this problem. For more on this topic see (Crane, 2006).
This “small” step represents an important element in the development of Husserlian phenomenology since it shows how the object is active in the intentional relation binding subject and object (Dahlstrom, 2007; Husserl, 1939). The arts, especially thanks to new materialism, provide valuable examples of intentionality originating within the object (Jiaying, 2018).
It is possible to tackle the distinction between objects and technologies from within the Husserlian perspective, since Husserl discusses both objects and technologies that change the perception of objects (Husserl, 1952; Liberati, 2016a, 2016b). Moreover, he speaks of aesthetic objects as possible mediators of how we look at the world (Husserl, 1980; Liberati, 2018; Lotz, 2007; Uzelac, 1998; Warren, 2010). However, this paper aims to show how technologies are “alive” as other objects are, and so it focuses on the elements digital technologies and objects have in common rather than on their differences.
This position is present not only in phenomenology; it is also in line with other approaches close to the philosophy of technology, like new materialism in the arts, which speaks of ‘non-human agencies’ and ‘vibrant matter’. For more on this problem, see (Bennett, 2010; Liberati, 2021a; Orlie, 2010). Another interesting relation to the topic of technological intentionality can be found in the New Realism proposed by Maurizio Ferraris (2014), especially in his theory of social objects (which includes technological artifacts). Ferraris claims that ‘object = inscribed act’ and that technological objects can form and shape the intentions of users (pp. 55–56). Thus, the theory of technological intentionality proposed in the present article correlates closely with the realist approach in contemporary philosophy, since it can describe the foundations of objectivity (e.g., technological intentionality) and clarify the underlying correlations between them.
This relation focuses on the interactions between “subjects” and “objects”. It is also possible to extend this relation to other entities like the environment. In this case, the object would point to the environment instead of the subject.
As postphenomenology shows, multistability also relates to the possible different meanings of the objects.
An additional phenomenological variation inside the domain of multistability relates to the difference between the technological experience of an amateur and that of an expert. Put simply, the expert can experience many layers of the object that usually remain hidden to the amateur. For more on this, see (Hayler, 2015).
In Sect. 1.1, we showed, through the Husserlian perspective, how the object can be “autonomous” in its relation to subjects.
Even if the definition of computers might be problematic since it touches on different elements, for the sake of this paper we consider computers in the inseparable relation between their hardware and software (Coeckelbergh, 2020, pp. 69–70).
For example, there are robots designed to mimic the presence of another living being, like a seal or a person, which clearly shows how digital technologies can be perceived as something more than dead objects. For more on this, see (Hansen, 2006; Jecker, 2020; Liberati, 2017b, 2021b; Viik, 2020; Weiser, 1993; Yamaguchi, 2020).
In the present article, we refer to the C++ programming language as it is one of the most widely used programming languages today. However, our phenomenological analysis of C++ may also be extended, with appropriate modifications, to other programming languages like Python, since they share similar features.
We have chosen the Generative Adversarial Model because of its ‘double network’ architecture: two networks constantly interact within the learning process. The phenomenological analysis of this ‘interactive’ element helps to reveal the model’s inner horizon, and so it provides clearer conceptual evidence about the nature of technological intentionality in machine learning techniques.
The integrated development environment (IDE) required for developing programs in C++ may provide us with another conceptual dimension with reference to James Gibson’s ‘affordance theory’ (Gibson, 2015). According to Gibson, the primary elements that animals perceive in the environment are affordances, which may be seen as potentialities for further actions or deeds (de Boer, 2021). The IDE, as a digital environment for developing programs in C++, represents an analogous environment (or, in terms of Gibson’s theory, an ‘ecological niche’) for possible (or prohibited) actions. However, in the case of the IDE, instead of animals we are talking about digital agents, which also interact with their environment in correspondence with Gibson’s theory by having their own ‘ecological niches’ and affordance/prohibition patterns.
In relation to this idea, it is worth referring to Giddens’ theory of structuration (1984), which has had a deep impact on affordance theory (Conole & Dyke, 2004) and also plays a vital role in today’s field of human–computer interaction (Vyas et al., 2017). Giddens’ structuration theory is tightly related to the structural properties of technology, suggesting that every technology simultaneously constrains and enables human actions. This theory can be related to technological intentionality as it focuses on the inner components of technologies and highlights their structural ambivalences.
Such a relation can easily be extended, since these programs act autonomously even in relation to the designers who created the programs interacting with them.
It is important to point out that some elements of the programming process (together with components of the IDE) described in this section with reference to C++ are also required for GAN, which will be the focus of the next section. It seems important to stress some technological similarities between these two elements in order to make the further phenomenological analysis more precise. Additionally, in the case of GAN we can also highlight such elements of the programming process as data collection, data preparation, etc.
In relation to technological intentionality within programming in C++, we have to keep in mind the distinction between three types of objects in addition to the programming language (C++) as a set of instructions for a computer to execute. The first type of object is the computer itself. The second type includes the supplementary programs used to translate the code into machine language (i.e., machine code). The third type is the final program.
There is another important difference between information and non-information artifacts that is worth mentioning here. Non-information objects cannot change their operation rules, while computer programs have some level of autonomy, and their behavior cannot be totally programmed (as, for instance, in the case of machine learning algorithms) (Berkich, 2017). This means that the system may reach non-deterministic states that were not explicitly programmed by its designers (Johnson, 2006). On developing machine learning programs with C++, see, for example, (Kolodiazhnyi, 2020). As the book shows in Chapter 10, “Neural Networks for Image Classification,” on using C++ libraries to create an artificial neural network, we can have a network able to learn by changing its own parameters (like neuron weights) in the course of its operation to accomplish specific tasks like image classification, data structuring, and providing recommendations. Today such networks are widely used in many social domains, ranging from decision-support programs for medical diagnosis to automated decision-making in urban infrastructures.
In relation to this topic, it is worth mentioning Floridi and Sanders’ approach to artificial moral agency as a significant conceptual development in the field of ‘mind-less morality’ (Floridi & Sanders, 2004). However, their approach concerns primarily the ethical implications of artificial agency and therefore falls outside the scope of the current article.
References
Aarts, E., & de Ruyter, B. (2009). New research perspectives on ambient intelligence. Journal of Ambient Intelligence and Smart Environments, 1(1), 5–14. https://doi.org/10.3233/AIS-2009-0001
Arel, I., Rose, D., & Karnowski, T. (2010). Deep machine learning—A new frontier in artificial intelligence research. IEEE Computational Intelligence Magazine, 5(4), 13–18. https://doi.org/10.1109/MCI.2010.938364
Aydin, C., González Woge, M., & Verbeek, P.-P. (2018). Technological environmentality: Conceptualizing technology as a mediating milieu. Philosophy and Technology, 32(2), 321–338. https://doi.org/10.1007/S13347-018-0309-3
Bennett, J. (2010). Vibrant matter. Duke University Press. https://doi.org/10.1515/9780822391623
Berkich, D. (2017). The problem of original agency. Southwest Philosophy Review, 33(1), 75–82. https://doi.org/10.5840/swphilreview20173318
Berkich, D. (2018). Machine intentions. The APA Newsletter on Philosophy and Computers, 18(1), 3–10.
Biceaga, V. (2010). The concept of passivity in Husserl’s phenomenology (Contributions to Phenomenology). Springer.
Chalmers, D. J. (1996). The conscious mind. Oxford University Press.
Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018). Notes from the AI frontier: Insights from hundreds of use cases. McKinsey Global Institute.
Coeckelbergh, M. (2020). AI ethics. The MIT Press. https://mitpress.mit.edu/books/ai-ethics
Conole, G., & Dyke, M. (2004). What are the affordances of information and communication technologies? Research in Learning Technology, 12(2), 113–124. https://doi.org/10.3402/RLT.V12I2.11246
Conty, A. (2013). Techno-phenomenology: Martin Heidegger and Bruno Latour on how phenomena come to presence. South African Journal of Philosophy, 32(4), 311–326. https://doi.org/10.1080/02580136.2013.865099
Crane, T. (2006). Brentano’s concept of intentional inexistence. In M. Textor (Ed.), The Austrian contribution to analytic philosophy (pp. 1–20).
Crane, T. (2010). Is there a perceptual relation? In Perceptual experience (pp. 126–146). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199289769.003.0004
Dahlstrom, D. (2007). The intentionality of passive experience: Husserl and a contemporary debate. New Yearbook for Phenomenology and Phenomenological Philosophy, 7, 25–42.
de Boer, B. (2021). Explaining multistability: Postphenomenology and affordances of technologies. AI and Society, 2021, 1–11. https://doi.org/10.1007/S00146-021-01272-3
de Preester, H. (2011). Technology and the body: The (Im)possibilities of re-embodiment. Foundations of Science, 16(2–3), 119–137. https://doi.org/10.1007/s10699-010-9188-5
de Preester, H., & Tsakiris, M. (2009). Body-extension versus body-incorporation: Is there a need for a body-model? Phenomenology and the Cognitive Sciences, 8(3), 307–319. https://doi.org/10.1007/s11097-009-9121-y
de Warren, N. (2010). Tamino’s eyes, Pamina’s gaze: Husserl’s phenomenology of image-consciousness refashioned. https://doi.org/10.1007/978-94-007-0071-0_12
Dennett, D. C. (1971). Intentional systems. Journal of Philosophy, 68(4), 87–106. https://doi.org/10.2307/2025382
Dennett, D. (1987). The intentional stance. The MIT Press.
Dennett, D. C. (1996). Kinds of minds. Basic Books.
Dretske, F. (1995). Naturalizing the mind. The MIT Press. https://mitpress.mit.edu/books/naturalizing-mind
Fasoli, M. (2018). Super artifacts: Personal devices as intrinsically multifunctional, meta-representational artifacts with a highly variable structure. Minds and Machines, 28(3), 589–604. https://doi.org/10.1007/s11023-018-9476-3
Ferrarin, A. (2006). Passive synthesis and life-world. AA.VV.
Ferraris, M., & De Sanctis, S. (2014). Manifesto of new realism. In SUNY series in contemporary Italian philosophy. State University of New York Press.
Floridi, L. (2013). The ethics of information. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199641321.001.0001
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Fodor, J. A. (2003). Concepts. Oxford University Press. https://doi.org/10.1093/0198236360.001.0001
Brentano, F. (1973). Psychology from an empirical standpoint (1st ed.). Routledge.
Gallagher, S. (1995). Body schema and intentionality. MIT Press.
Gauttier, S., & Liberati, N. (2020). Exploring the relation between Q methodology and phenomenology: Designing conditions of instruction based on the phenomenological concepts of variation and horizons. Operant Subjectivity, 42, 33–57. https://doi.org/10.15133/J.OS.2020.002
Geniusas, S. (2011). William James and Edmund Husserl on the horizontality of experience. Transcendentalism Overturned. https://doi.org/10.1007/978-94-007-0624-8_36
Geniusas, S. (2012a). The origins of the horizon in Husserl’s phenomenology (Contributions to Phenomenology). Springer.
Geniusas, S. (2012b). The world-horizon in ideas I. In The origins of the horizon in Husserl’s phenomenology (Vol. 67, pp. 55–64). Springer. https://doi.org/10.1007/978-94-007-4644-2_4
Gibson, J. (2015). The ecological approach to visual perception. Psychology Press.
Giddens, A. (1984). The constitution of society: Outline of the theory of structuration. Polity Press.
Graves, A., & Clancy, K. (2019). Unsupervised learning: The curious pupil. https://deepmind.com/blog/article/unsupervised-learning
Haack, D. (2016). The epoche and the intentional stance. Journal of Cognition and Neuroethics, 4(1), 27–44.
Hansen, M. B. N. (2006). Bodies in code: Interfaces with digital media. Routledge Taylor & Francis Group. https://doi.org/10.4324/9780203942390
Harman, G. (2008). On the horror of phenomenology: Lovecraft and Husserl. COLLAPSE, 4, 333–364.
Hayler, M. (2015). Challenging the phenomena of technology. Palgrave Macmillan. https://doi.org/10.1057/9781137377869
Hernández-Orallo, J. (2020). Twenty years beyond the turing test: Moving beyond the human judges too. Minds and Machines, 30(4), 533–562. https://doi.org/10.1007/S11023-020-09549-0
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information Communication and Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912
Husserl, E. (1939). Erfahrung und Urteil: Untersuchungen zur Genealogie der Logik. Allen and Unwin.
Husserl, E. (1950). Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie. Springer. https://doi.org/10.1007/978-94-010-1041-2
Husserl, E. (1952). Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie: Allgemeine Einführung in die reine Phänomenologie. De Gruyter. https://doi.org/10.1515/9783110916096
Husserl, E. (1966). Husserliana XI: Analysen zur passiven Synthesis. Aus Vorlesungs- und Forschungsmanuskripten (1918–1926) (M. Fleischer, Ed.).
Husserl, E. (1976). Die Krisis der europäischen Wissenschaften und die transzendentale Phänomenologie (Husserliana VI). https://doi.org/10.1007/978-94-010-1335-2
Husserl, E. (1980). Phantasie, Bildbewusstsein, Erinnerung. Springer. https://doi.org/10.1007/978-94-009-8781-4
Ihde, D. (1978). Technics and praxis (Vol. 24). Springer. https://doi.org/10.1007/978-94-009-9900-8
Ihde, D. (1990). Technology and the lifeworld. Indiana University.
Ihde, D. (2012). Experimental phenomenology (2nd ed.). State University of New York Press.
Irwin, S. (2005). Technological other/quasi other: Reflection on lived experience. Human Studies, 28(4), 453–467. https://doi.org/10.1007/s10746-005-9002-5
Jacob, P. (2019). Intentionality. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/intentionality/
Jecker, N. S. (2020). You’ve got a friend in me: Sociable robots for older adults in an age of global pandemics. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09546-y
Jiaying, C. (2018). Post internet art inside and outside the Chinternet. In H. Sunquan (Ed.), Force of reticulation: Essays of first annual conference of network society. China Academy of Art.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133. https://doi.org/10.1007/s10676-008-9174-6
Jorba, M. (2019). Husserlian horizons, cognitive affordances and motivating reasons for action. Phenomenology and the Cognitive Sciences, 19(5), 847–868. https://doi.org/10.1007/S11097-019-09648-Z
Kaplan, D. (1978). Dthat. In P. Cole (Ed.), Syntax and semantics (Vol. 9, pp. 221–243). Academic Press.
Kolodiazhnyi, K. (2020). Hands-on machine learning with C++. Packt Publishing.
Laato, S., Hyrynsalmi, S., Rauti, S., Islam, A. K. M. N., & Laine, T. H. (2020). Location-based games as exergames-from Pokémon to the wizarding world. International Journal of Serious Games, 7(1), 79–95. https://doi.org/10.17083/ijsg.v7i1.337
Latour, B., & Venn, C. (2002). Morality and technology. Theory, Culture and Society, 19(5–6), 247–260. https://doi.org/10.1177/026327602761899246
Liberati, N. (2018). Being Riajuu [ ]: A Phenomenological Analysis of Sentimental Relationships with “Digital Others.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10715 LNAI, 12–25. https://doi.org/10.1007/978-3-319-76369-9_2
Liberati, N. (2016a). Augmented reality and ubiquitous computing: The hidden potentialities of augmented reality. AI and Society, 31(1), 17–28. https://doi.org/10.1007/s00146-014-0543-x
Liberati, N. (2016b). Technology, phenomenology and the everyday world: A Phenomenological analysis on how technologies Mould our world. Human Studies, 39(2), 189–216. https://doi.org/10.1007/s10746-015-9353-5
Liberati, N. (2017a). Teledildonics and new ways of “being in touch”: a phenomenological analysis of the use of haptic devices for intimate relations. Science and Engineering Ethics, 23(3), 801–823. https://doi.org/10.1007/s11948-016-9827-5
Liberati, N. (2017b). Phenomenology, pokémon go, and other augmented reality games a study of a life among digital objects. Human Studies, 41(2), 211–232. https://doi.org/10.1007/s10746-017-9450-8
Liberati, N. (2019). Emotions and digital technologies. The effects digital technologies will have on our way of feeling emotions according to post-phenomenology and mediation theory. Mente Journal of Philosophical Studies, 36(36), 292–309.
Liberati, N. (2020a). Making out with the world and valuing relationships with humans Mediation theory and the introduction of teledildonics. Paladyn, 11(1), 140–146. https://doi.org/10.1515/pjbr-2020-0010
Liberati, N. (2020b). The Borg–eye and the We–I. The production of a collective living body through wearable computers. AI and Society, 35(1), 39–49. https://doi.org/10.1007/s00146-018-0840-x
Liberati, N. (2021a). La Vita Nell’oggetto In Fenomenologia, Postfenomenologia, Nuovo Materialismo E Arte. https://endoxai.net/2021a/01/16/la-vita-nelloggetto-in-fenomenologia-postfenomenologia-nuovo-materialismo-e-arte/
Liberati, N. (2021b). Phenomenology and sexrobots. A phenomenological analysis of sexrobots, threesome, and love relationships. International Journal of Technoethics, 12(2), 86–97.
Liberati, N., & Nagataki, S. (2015). The AR glasses’ “non-neutrality”: Their knock-on effects on the subject and on the giveness of the object. Ethics and Information Technology, 17(2), 125–137. https://doi.org/10.1007/s10676-015-9370-0
Loewer, B. (1987). From information to intentionality. Synthese, 70(2), 287–317. https://doi.org/10.1007/BF00413940
Lohmar, D. (2012). Language and non-linguistic thinking. In D. Zahavi (Ed.), The Oxford handbook of contemporary phenomenology (pp. 377–399). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199594900.013.0019
Lotz, C. (2007). Depiction and plastic perception. A critique of Husserl’s theory of picture consciousness. Continental Philosophy Review, 40(2), 171–185. https://doi.org/10.1007/S11007-007-9049-2
Malik, D. (2015). C++ programming: Program design including data structures. Cengage Learning.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
Meixner, U. (2006). Classical intentionality. Erkenntnis, 65(1), 25–45. https://doi.org/10.1007/S10670-006-9013-2
Miller, K. W., Wolf, M. J., & Grodzinsky, F. (2017). This “ethical trap” is for roboticists, not robots: On the issue of artificial agent ethical decision-making. Science and Engineering Ethics, 23(2), 389–401. https://doi.org/10.1007/s11948-016-9785-y
Modena, E., Pinotti, A., & Pirandello, S. (2021). Virtual reality and augmented reality new tools for art and politics. Paradigmi, 39(1), 87–106. https://doi.org/10.30460/100230
Mykhailov, D. (2020). The phenomenological roots of technological intentionality: A postphenomenological perspective. Frontiers of Philosophy in China, 15(4), 612–635. https://doi.org/10.3868/s030-009-020-0035-6
Mykhailov, D. (2021). A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Human Affairs, 31(2), 149–164. https://doi.org/10.1515/humaff-2021-0013
Orlie, M. A. (2010). Impersonal matter. In D. Coole & S. Frost (Eds.), New materialisms: ontology, agency, and politics (pp. 116–136). Duke University Press.
Overgaard, S. (2004). Husserl and Heidegger on being in the world. Springer. https://doi.org/10.1007/978-1-4020-2239-5_1
Pasquinelli, M. (2019). How a machine learns and fails-a grammar of error for artificial intelligence. Spheres; Journal for Digital Cultures, 5, 1–17. https://doi.org/10.2139/ssrn.3078224
Possati, L. M. (2020). Towards a hermeneutic definition of software. Humanities and Social Sciences Communications, 7(1), 1–11. https://doi.org/10.1057/s41599-020-00565-0
Rapp, A. (2021). Wearable technologies as extensions: A postphenomenological framework and its design implications. Human-Computer Interaction. https://doi.org/10.1080/07370024.2021.1927039
Romele, A. (2020). Digital hermeneutics: Philosophical investigations in new media and technologies. Routledge.
Rosenberger, R. (2014). Multistability and the agency of mundane artifacts: From speed bumps to subway benches. Human Studies, 37(3), 369–392. https://doi.org/10.1007/s10746-014-9317-1
Rosenberger, R. (2016). Husserl’s missing multistability. Techne: Research in Philosophy and Technology, 20(2), 153–167. https://doi.org/10.5840/techne20168356
Rosenberger, R. (2017). On the hermeneutics of everyday things: Or, the philosophy of fire hydrants. AI and Society, 32(2), 233–241. https://doi.org/10.1007/s00146-016-0674-3
Schmid, H. B. (2012). Sharing in truth: Phenomenology of epistemic commonality. In D. Zahavi (Ed.), The Oxford handbook of contemporary phenomenology. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199594900.013.0020
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117. https://doi.org/10.1016/j.neunet.2014.09.003
Schnell, A. (2002). Das Problem der Zeit bei Husserl. Eine Untersuchung über die husserlschen Zeitdiagramme. Husserl Studies, 18, 89–122. https://doi.org/10.1023/A:1015579408870
Searle, J. R. (1983). Intentionality. Cambridge University Press. https://doi.org/10.1017/CBO9781139173452
Segal, G. (2007). Intentionality. In F. Jackson & M. Smith (Eds.), The Oxford handbook of contemporary philosophy. Oxford University Press. https://doi.org/10.1093/OXFORDHB/9780199234769.003.0011
Sokolowski, R. (1974). Identities in manifolds: A Husserlian pattern of thought. Research in Phenomenology, 4, 63–79.
Steinbock, A. J. (1997). Back to the things themselves. Human Studies, 20(2), 127–135. https://doi.org/10.1023/a:1005350727295
Tossell, C. C., Kortum, P., Shepard, C., Rahmati, A., & Zhong, L. (2012). An empirical analysis of smartphone personalisation: Measurement and user variability. Behaviour and Information Technology, 31(10), 995–1010. https://doi.org/10.1080/0144929X.2012.687773
Uzelac, M. (1998). Art and phenomenology in Edmund Husserl. Axiomathes, 9(1), 7–26. https://doi.org/10.1007/BF02681700
Verbeek, P. (2005). What things do. Philosophical reflections on technology, agency, and design. Penn State University Press.
Verbeek, P. P. (2008a). Obstetric ultrasound and the technological mediation of morality: A postphenomenological analysis. Human Studies, 31(1), 11–26. https://doi.org/10.1007/s10746-007-9079-0
Verbeek, P. P. (2008b). Cyborg intentionality: Rethinking the phenomenology of human-technology relations. Phenomenology and the Cognitive Sciences, 7(3), 387–395. https://doi.org/10.1007/S11097-008-9099-X/FIGURES/1
Verbeek, P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
Verbeek, P. P. (2015). Cover story: Beyond interaction: A short introduction to mediation theory. Interactions, 22(3), 26–31. https://doi.org/10.1145/2751314
Verbeek, P. P. (2020). Politicizing postphenomenology. In Philosophy of ENGINEERING AND TECHNOLOGY (Vol. 33, pp. 141–155). Springer. https://doi.org/10.1007/978-3-030-35967-6_9
Viik, T. (2020). Falling in love with robots: A phenomenological study of experiencing technological alterities. Paladyn, Journal of Behavioral Robotics, 11(1), 52–65. https://doi.org/10.1515/PJBR-2020-0005
Vyas, D., Chisalita, C. M., & Dix, A. (2017). Organizational affordances: A structuration theory approach to affordances. Interacting with Computers, 29(2), 117–131. https://doi.org/10.1093/IWC/IWW008
Waldenfels, B. (2012). Responsive ethics. In D. Zahavi (Ed.), The Oxford handbook of contemporary phenomenology (pp. 423–442). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199594900.013.0021
Weiser, M. (1993). Some computer science issues in ubiquitous computing. Communications of the ACM, 36(7), 75–84. https://doi.org/10.1145/159544.159617
Wellner, G. (2011). Wall-window-screen: How the cell phone mediates a worldview for us 1. Humanities and Technology Review, 30, 87–103.
Wellner, G. (2013). No Longer a Phone. Transfers, 3(2), 70–88. https://doi.org/10.3167/trans.2013.030205
Wellner, G. (2018). From cellphones to machine learning. A shift in the role of the user in algorithmic writing. Towards a Philosophy of Digital Media. https://doi.org/10.1007/978-3-319-75759-9_11
Wellner, G. (2020a). The multiplicity of multistabilities. Turning multistability into a multistable Concept. In G. Miller & A. Shew (Eds.), Reimagining philosophy and technology, reinventing ihde. Philosophy of engineering and technology (Vol. 33, pp. 105–122). Cham: Springer. https://doi.org/10.1007/978-3-030-35967-6_7
Wellner, G. (2020b). Digital subjectivity: From a network metaphor to a layer-plateau model. Azimuth, 14, 55–66.
Whyte, K. (2015). What is multistability? A theory of the keystone concept of postphenomenological research. In R. P. Crease & J. K. B. O. Friis (Eds.), Technoscience and postphenomenology: The Manhattan papers (pp. 69–81). Lexington Books.
Willis, P. (2001). The “things themselves” in phenomenology. Indo-Pacific Journal of Phenomenology, 1(1), 1–12. https://doi.org/10.1080/20797222.2001.11433860
Wiltse, H. (2020). Revealing relations of fluid assemblages. In H. Wiltse (Ed.), Relating to things: Design, technology and the artificial (pp. 239–253). Bloomsbury Visual Arts.
Yamaguchi, H. (2020). “Intimate relationship” with “virtual humans” and the “socialification” of familyship. Paladyn, Journal of Behavioral Robotics, 11(1), 357–369. https://doi.org/10.1515/PJBR-2020-0023
Zalta, E. N. (1991). Intensional logic and the metaphysics of intentionality. The Philosophical Review. https://doi.org/10.2307/2185073
Acknowledgements
The research for this paper of Dr. Dmytro Mykhailov has been supported by the Major Program of National Fund of Philosophy and Social Science of China (Number 19ZDA040) “The philosophy of technological innovations and the practical logic of Chinese independent innovation” (技术创新哲学与中国自主创新的实践逻辑研究). Both authors contributed equally to the realization of the paper.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mykhailov, D., Liberati, N. A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Found Sci 28, 841–857 (2023). https://doi.org/10.1007/s10699-022-09833-5