1 Introduction

In the last 17 years, research in computational design and robotic fabrication in architecture, engineering, and construction (AEC) has made remarkable advances. These advances have introduced a variety of approaches using diverse robotic and material systems, ranging from complex timber construction (Leung et al. 2021; Wagner et al. 2020) and 3D printing with concrete (Anton et al. 2021; Gosselin et al. 2016) to autonomous brick assembly (Dörfler et al. 2016; Bonwetsch et al. 2007), in both on- and off-site construction scenarios. Robotic fabrication has enabled rapid and precise production with increased construction customization, accuracy, and process reliability in various work environments and scales (Gramazio et al. 2014). However, because construction robots have been designed and programmed for relatively static work environments and predefined processes, most robotic processes require the robots to run in work cells, free from humans and unpredictable disturbances. Once a robot is programmed, the environment and the objects it interacts with are expected to remain within the range of variance the robot was programmed for. Therefore, especially in unstructured environments, such as construction sites, the level of robustness and autonomy of such robotic processes is still remarkably low (Edsinger and Kemp 2007). Due to this low level of robustness and autonomy, these robots still rely on human operators to make critical decisions or assist the robotic fabrication process (Moniz and Krings 2016). Moreover, the lack of autonomy limits the robots’ ability to seamlessly and reliably interact and collaborate with humans, thereby missing out on the benefit of leveraging complementary skills. Research has not yet focused enough on complementary workflows and human-in-the-loop processes in AEC, leading to a lack of working and intuitive interfaces for robotic fabrication processes that enable seamless communication and data exchange between humans and robots (Aryania et al. 2012). This deficiency, in turn, limits new design and manufacturing opportunities and delays the wider adoption and integration of robotic fabrication into AEC.

This research addresses this limitation by examining how to develop cooperative and semi-autonomous manufacturing systems between humans and robots. It focuses on hybridizing robotic fabrication with traditional manual workflows, developing a balanced human–machine collaboration system that can enable novel, intelligent, and economical workflows for AEC. Such workflows make equal use of human and machine capabilities—the autonomous and interactive capabilities of robots, such as robotic precision and computational iteration, and human cognitive and physical abilities, such as manual dexterity, material knowledge, and intuition. The research findings of this paper emerged through physical experimentation and a proof-of-concept prototype of a complex wooden structure with rope joints (Fig. 1).

Fig. 1 Cooperative assembly scenario by human–robot teams

The cooperative assembly workflow is designed for a dually augmented human–robot team involving two mobile robots and two humans. A shared digital-physical workspace is established to facilitate cooperative assembly tasks distributed between humans and robots. Humans can initiate assembly cycles and take turns with the robots to construct wooden Y-triplet units. These units are made of three struts—one from a previous triplet, one assembled by the robot, and one manually assembled by a human. The manually assembled element can be placed freely, following a set of local rules that influence the design of the structure on the fly. These human additions to the built structure need to be continuously digitized and fed into the digital model, from which robot routines can be derived successively. Therefore, this research utilizes recent advancements in mobile augmented reality (AR) technology and sensor-enabled context awareness (Sandy and Buchli 2018) to track and detect such manually added changes to the built structure automatically and precisely. Alternatively, the collaborative robots themselves serve as precision instruments with which humans track and register manual changes. In this paper, we evaluate the accuracy of both tracking methods to understand how humans and robots can collaborate in assembling a large-scale structure.

The remainder of this paper is structured as follows. Section 2 provides an overview of the state of the art of non-linear design-to-fabrication workflows and hybridized robotic and manual fabrication processes in AEC. Section 3 introduces a set of digital tools enabling a novel cooperative assembly workflow of a wooden structure in a shared geometric workspace between humans and robots, presenting a system walkthrough and the hardware and software components. Section 4 presents the results and current limitations. Section 5 discusses the findings and the technical challenges of the case study, and Section 6 concludes with an outlook on future research directions. In summary, this paper discusses how—by getting humans and machines to communicate with one another—the notion of a hybrid human–robot work team could open new avenues for digital fabrication in architecture.

2 Background

The following sections illustrate how our study expands on previous work by investigating non-linear design-to-fabrication workflows and cooperative fabrication processes in digital fabrication. Specifically, this research focuses on hybridizing robotic and manual construction techniques by human–robot teams.

2.1 Non-linear design-to-fabrication workflows

Most digital design-to-fabrication workflows in architecture are linear due to the explicit nature of machine instructions, thus requiring most design decisions to be made prior to fabrication. Traditional craft processes, on the contrary, are not necessarily linear but rather encourage practitioner creativity by not entirely specifying the path of execution (Knight 2018). To incorporate this non-linear approach, Knight and Stiny (2015) introduced a computational theory of making grammars, which expands the theory of shape grammars (Knight 2015) for the study and digital representation of the temporal performance of craft. They articulated the fundamental creative processes of craft by segmenting spatial and temporal aspects and by applying rules to both the act of creation and sensory perception. They understood crafting as “doing and sensing with stuff to make things”. Such procedures are open and do not entirely predetermine the result. Ultimately, through sensory perception, they enable practitioners to make changes to plans, for instance, to make design adjustments, pursue new design ideas, or accommodate mistakes. Further concepts for non-linear digital fabrication workflows utilizing user input technologies have been demonstrated in the last few years. Interactive Fabrication (Willis et al. 2010) shows how users can control the digital fabrication of a physical form using real-time input devices. Another example of such a process is IRoP (Mitterberger et al. 2022), an interactive augmented robotic plaster-spraying process. In Interlacing (Dörfler et al. 2013), a robot makes design decisions within a constrained design space based on 2D camera tracking. Prototype-as-Artifact (Atanasova et al. 2020) presents core concepts of non-linear design-to-fabrication workflows and explores the possibility of making bottom-up design decisions while building in a human–robot cooperative setting. However, in that project, the task distribution allocated to humans and robots was interchangeable and not explicitly tuned to their unique strengths.

2.2 Toward hybridizing robotic and manual fabrication

As has been explored in previous research, humans and robots have different strengths (Haddadin et al. 2011; Patel et al. 2017), and cooperative processes should make the most of this by tailoring the role each agent plays. There are diverse strategies for such task distribution in cooperative processes (Fiebich et al. 2015). In this research, we focus mainly on task distributions where a machine assists a human while fabricating, a process we define as machine-assisted human fabrication. In this case, human fabrication or "human-made" no longer applies only to handcrafted objects. Rather, machine-assisted human fabrication also incorporates partially automated processes whose physical output still depends on the human craftsperson who oversees and participates in the fabrication (Mitterberger 2021). Only a few research projects combine difficult-to-automate manual fabrication tasks with robotic fabrication tasks. iHRC (Amtsberg et al. 2021) introduces a workflow that enables workers to decide actively on the human–machine cooperative task distribution. The human worker can take over specific process parts, such as picking and placing timber slats or slat fixation, or delegate these tasks to the machine. Another human–robot cooperative building process is the Hive pavilion (Lafreniere et al. 2016; Vasey et al. 2016). Its live building process coordinates multiple human workers via a phone-based app that provides instructions to locate materials and respond to commands like tightening mechanical ratchets, placing finished elements, and supervising fabrication quality. Another example of combining robotic and manual processes is CRoW (Kyjanek et al. 2019). In turn-taking tasks, a user equipped with an AR headset can plan the placement of wooden elements by assessing the fabrication data beforehand. Subsequently, the robotic arm places the wooden plank, and the operator nails it manually. These projects show how such hybridization of automated processes and manual construction techniques can increase the flexibility and robustness of automated workflows. However, in these projects, the tasks selected for the human operator do not require extensive dexterity or elaborate context perception.

Hybridizing robotic fabrication with manual tasks also includes the combination of manual esthetics with automated processes. Projects such as RobotSculptor (Ma et al. 2020) or Adaptive Robotic Carving (Brugnaro and Hanna 2018) allow the results of an automated process to achieve a handcrafted look. These projects show the potential of combining predefined robotic processes with human dexterity. However, the design in these projects is finished before fabrication starts, and therefore, these processes do not fully embrace the potential of combining human–machine collaboration with interactive fabrication.

3 Case study—Tie a knot

3.1 Overview

This research aims to combine non-linear design with an interactive fabrication process to facilitate a human–robot cooperative workflow for assembling a complex wooden structure using rope joints. The cooperative assembly workflow consists of turn-taking tasks between two humans and two robots according to predefined rules and action sequences. A shared digital-physical workspace enables the cooperation between humans and robots, in which tracking systems are used to update the digital-physical workspace continuously. An extended reality system informs the cooperating humans about the design space and fabrication-related boundary conditions. Humans can initiate design and assembly cycles that are continued, assisted, and completed by the cooperating robots. These assembly cycles are composed of five turn-taking steps: (A) interactive design, (B) robotic assembly, (C) manual assembly, (D) rope jointing, and (E) registration of manually assembled elements (Fig. 2).

Fig. 2 Overview of the cooperative assembly cycle, consisting of five main components: (A) interactive design, (B) robotic assembly, (C) manual assembly, (D) rope jointing, (E) tracking of elements

Each assembly cycle consists of adding a wooden Y-triplet made of three struts: one from a previous triplet, one assembled by the robot (B), and one manually assembled by a human (C). At the beginning of each assembly cycle, users can adjust global constraints such as growth direction and density (A). Then they move the robots within reach of the first element of the Y-triplet, which is already assembled and part of a previous Y-triplet. The second element is robotically assembled and held in place by robot one (B) until user one places the third element and closes the Y-triplet (C). The manually assembled element can be placed freely following a set of local rules that affect the design of the structure on the fly. After all the struts are placed, they are connected via rope joints by user two (D). After placing the joints, the manually placed element is tracked by user two and included in the digital model (E). While robot one stays in place to stabilize the structure, robot two is used to continue the building cycle in direct response to the manually assembled strut. To test the feasibility of the concept, a large-scale proof-of-concept timber structure was built with this open-ended design process (Fig. 3).

Fig. 3 Proof-of-concept prototype to test the system's design principles and workflow

3.2 Collaboration and task distribution

A meaningful task distribution between humans, robots, and computational processes should fit the unique strengths of the cooperating agents. Based on the known set of higher-level actions (i.e., planning, picking, placing, stabilizing, joining), the assembly process is formulated as a flexible task shop. The task distribution is sequence-dependent and incorporates spatial dependencies. The turn-taking task distribution presented in Fig. 4 shows the combination of human tasks assisted by follow-up robotic tasks.

Fig. 4 Turn-taking task distribution between humans and robots

Humans perform physical tasks that are difficult for the robot, such as positioning elements that dock onto existing structures and tying knots. Humans also perform cognitive and intuitive tasks such as spontaneous design decisions and adjustments, as well as the digitization of manually placed elements. The robots perform precise spatial operations, i.e., spatially complex pick-and-place routines, as well as structural stabilization to aid in assembling the Y-triplets as fully stable configurations, which is a difficult task for humans. A continuously updated digital model is necessary to enable a mutual distribution of tasks between humans and robots. Human actions, such as manually assembled elements, must be digitized and fed into the digital model to enable a direct reaction of the cooperating robots. We use and compare two different methods of digitizing human actions: a visual-inertial object tracking method using the mobile AR device and a point-to-point localization method using the robots (refer to Sect. 3.4 for more technical details).

3.3 Material system

Timber struts: As building elements, we use timber struts with a length of 1000 mm and a radius of 20 mm. Three interconnected timber struts form a Y-triplet, and multiple interlocking Y-triplets define a reciprocal space frame (Fig. 5). The first timber strut in a Y-triplet is part of an already placed space frame (E1), a robot assembles the second strut (E2), and a human assembles the third strut (E3). Timber struts touching the existing context, i.e., ground or walls, are fixed with 3D-printed flexible joints.

Fig. 5 Reciprocal space frame structure from interlocking struts, referred to as Y-triplet

Rope joints: To join the interconnected timber struts, rope joints are used. The advantages of rope joints are that they are reversible, lightweight, and flexible, allowing for a higher error margin during construction. Furthermore, the flexible connection by rope avoids cutting or drilling holes into the material, which would weaken its cross-section. However, a rope joint requires a high level of dexterity in placement, making this method of joining very difficult for a robot to carry out. Therefore, this task is assigned to humans. In this research, we use the God's Eye rope joint (Fig. 6). This joint is typically used in basket weaving to join a pair of sticks together. We chose it because the knot can easily be extended to cover whole surface areas and can be used to define different spatial articulations. Different colors of thread were used to indicate the origins of the assembly, i.e., whether a strut was placed manually or robotically.

Fig. 6 God's Eye rope joint

3.4 Cooperative assembly logic

As introduced in Sect. 3.1, the assembly logic incorporates five main turn-taking tasks distributed between humans and robots. These tasks are (A) interactive design, (B) robotic placement, (C) manual placement, (D) rope joints, and (E) registration of manually placed elements, comparing the use of (E1) a mobile AR device for automatic registration and (E2) a robot's measurement tip for manual registration of manually placed elements (see Fig. 7).

Fig. 7 System walkthrough: (A) interactive design, (B) robotic, (C) manual, and (D) rope joint placement, (E1) tracking with the phone, (E2) tracking with the robot

A) Interactive design: The interactive design environment builds upon algorithmic modeling methods open for user input during assembly. The computational logic of the interactive design model is based on the Assembly Information Model, which expands on a serializable network data structure available through the open-source Python-based COMPAS framework (Van Mele et al. 2017) within Rhinoceros and Grasshopper.

In the assembly model, each discrete element (strut) is stored as a node in a graph data structure. The edges of the graph represent the connections between the elements whose spatial arrangement is organized within global and local design rules. Three elements are combined into a Y-triplet featuring three connection options located on its open ends, referred to as connectors. Each connector is stored in the node's attributes as a frame, describing the position and orientation of the following triplet and a corresponding Boolean variable indicating whether the connector is closed or open. Therefore, each element in one already assembled triplet has one open and one closed connector.
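To make this structure concrete, the sketch below shows how such an assembly graph could be set up with COMPAS's network data structure. It is a minimal illustration under stated assumptions, not the project's actual implementation; the attribute names (connector_frame, connector_open, placed_by) and all values are ours, not the paper's.

```python
from compas.datastructures import Network
from compas.geometry import Frame, Point, Vector

assembly = Network()

# Each strut is a node; its open connector is stored as a frame
# (position and orientation of the follow-up triplet) plus a Boolean
# flag indicating whether the connector is still open.
a = assembly.add_node(
    connector_frame=Frame(Point(0.0, 0.0, 0.5), Vector(1, 0, 0), Vector(0, 1, 0)),
    connector_open=True,
    placed_by='robot',   # hypothetical attribute, e.g. for Fig. 12-style coloring
)
b = assembly.add_node(connector_open=True, placed_by='human')

# Edges represent physical connections (rope joints) between struts.
assembly.add_edge(a, b, joint='gods_eye')

# Once a follow-up triplet docks onto a connector, it is marked closed.
assembly.node_attribute(a, 'connector_open', False)

# The whole model serializes to JSON for exchange with the AR app (Sect. 3.5).
assembly.to_json('assembly.json')
```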

At each assembly cycle, humans can interactively generate design options abiding by specific local and global design rules, influencing the growth direction and geometry of the overall structure. To define the growth direction, the user freely picks a starting element in the CAD environment that is an already built element in the structure (Fig. 8—E1). After picking the first built element, the user visualizes the corresponding two elements to complete the Y-triplet and specifies their rotation angle around the starting element. After that, the user chooses which element will be placed robotically (Fig. 8—E2) and which one manually (Fig. 8—E3). The visualized position of the third element, which will be placed manually (Fig. 8—E3), is used only as guidance. Its actual position is chosen by the human during manual placement and updated retrospectively through registration. A geometric sketch of the rotation rule is given after Fig. 8.

Fig. 8 The user interface of the interactive design model provides input controls for selecting a starting element for a new triplet and defining the triplet's rotation angle via a number slider
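Geometrically, the rotation-angle rule amounts to spinning a candidate connector frame around the axis of the chosen starting strut. The following sketch illustrates this with COMPAS geometry types; the axis, base frame, and angle are illustrative values and not the project's data.

```python
import math
from compas.geometry import Frame, Point, Vector, Rotation

# Axis of the user-picked starting strut (E1) in world coordinates.
axis_point = Point(0.0, 0.0, 1.0)
axis_direction = Vector(1, 0, 0)

# Base frame describing a default pose for a completing element (E2/E3).
base = Frame(Point(0.5, 0.02, 1.0), Vector(0, 1, 0), Vector(0, 0, 1))

# Rotation angle set by the user via the number slider (Fig. 8).
angle = math.radians(35.0)

R = Rotation.from_axis_and_angle(axis_direction, angle, point=axis_point)
candidate = base.transformed(R)
print(candidate)  # pose previewed in the CAD environment
```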

B) Robotic placement: After the user selects the first element of a triplet (E1) based on the preview of the computed consecutive two elements (E2, E3) in the CAD design environment, the robot is used to place the next element (E2). The robotic assembly requires the calculation of a valid and collision-free trajectory for the associated pick-and-place routine according to the robot's current location in the workspace. We use Grasshopper and the COMPAS FAB library in combination with the MoveIt motion planner for the robot's trajectory planning. Each already placed strut, the mobile platform of the robot, and the robot manipulator are uploaded as collision objects to the planning scene, allowing trajectory planning to take the collision objects of the workspace into account. After computing a valid trajectory, the user sends the planned pick-and-place routine (target frames, IO control, and robot parameters) via a custom TCP/IP connection from the CAD design environment to the robot. The robot then picks up the wooden strut from the picking station, drives into a safe position, and moves toward the target position. After the successful robotic placement of the consecutive element, the robot stabilizes the element in place, waiting for the third element to be manually placed and joined into a stable reciprocal frame configuration.
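The sketch below outlines this planning step in the style of the compas_fab 0.x API (method names and signatures may differ between versions); the host address, mesh file, and tolerances are placeholders, not the project's values.

```python
import math
from compas.datastructures import Mesh
from compas.geometry import Frame
from compas_fab.backends import RosClient
from compas_fab.robots import CollisionMesh, PlanningScene

with RosClient('192.168.0.10') as client:        # Linux PC running ROS/MoveIt
    robot = client.load_robot()
    scene = PlanningScene(robot)

    # Every already placed strut becomes a collision object in the scene.
    strut = Mesh.from_stl('strut_01.stl')
    scene.add_collision_mesh(CollisionMesh(strut, 'strut_01'))

    # Target frame of the strut to be placed, taken from the assembly model.
    target = Frame([1.2, 0.4, 1.1], [1, 0, 0], [0, 1, 0])
    goal = robot.constraints_from_frame(
        target,
        tolerance_position=0.005,                # 5 mm positional tolerance
        tolerances_axes=[math.radians(1)] * 3)   # 1 degree per axis

    trajectory = robot.plan_motion(
        goal, start_configuration=robot.zero_configuration())
    print('planned trajectory with', len(trajectory.points), 'points')
```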

Since the robots are mobile, they can be remotely controlled by humans within the workspace and moved to wherever they are needed. Therefore, after each movement, the robots must be localized in relation to the assembled structure. For this localization, reference points with known coordinates in both the physical and digital environments are probed manually with the robot's measurement tip and aligned with an iterative closest point (ICP) algorithm to estimate the robot's position.
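Because the probed points correspond to known reference coordinates, the core of this alignment is a least-squares rigid fit (the Kabsch solution, which is also the inner step of ICP). A minimal numpy sketch, with illustrative coordinates:

```python
import numpy as np

def rigid_fit(P, Q):
    """Rotation R and translation t minimizing ||R @ p + t - q|| over all pairs."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Reference points probed in the robot's base frame vs. their known
# coordinates in the digital model (values are illustrative only).
probed = [[0.10, 0.02, 0.00], [0.85, 0.05, 0.01], [0.40, 0.60, 0.02]]
known  = [[2.10, 1.02, 0.00], [2.85, 1.05, 0.01], [2.40, 1.60, 0.02]]
R, t = rigid_fit(probed, known)
print('estimated robot base offset in world frame:', t)   # ~[2.0, 1.0, 0.0]
```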

C) Manual placement: After the second strut has been placed and the structure is stabilized by the robot, the human places the third element (E3). The aim is to complete the triplet, with respect to the local design rule, and thus close the structural triangle in the overlapping area of the three elements. When placing the third strut manually, the humans can test the ideal position for the element to stabilize the whole space frame. They can consider structural options, such as expanding the structure toward the floor or walls if needed. Furthermore, humans can interactively change spatial articulations of the structure, such as densities and openings of the space frames.

D) Rope joints: After finishing the manual placement, the second user connects the struts and places all three rope joints. During assembly and joining, a robot always stays in position to stabilize the structure until the next Y-triplet is built or equilibrium is reached. The other robot not used for stabilization is free to be used for further assembly of the structure.

E) Digitization of manual physical interventions: Before choosing the next Y-triplet, the manually placed strut needs to be registered in reference to the already built structure. The user has two options for registering the exact position of the strut.

The first (automated) option is via a custom AR-app on a mobile device. The AR-app uses the visual-inertial object tracking software by incon.ai in combination with message-passing capabilities, allowing for information exchange with the CAD design environment. The tracking system uses edge detection to detect the position and orientation of the struts in relation to known geometry and pre-registered QR codes. For further technical details and implementations of the incon.ai software, refer to Sandy and Buchli (2018). The message-passing capabilities are further explained in Sect. 3.5. After registration, the AR-app updates the digital model with the as-built data and adds the strut to the assembly model.

The second (manual) option is via the robot, where reference points are manually probed and used to fit the strut geometry. This registration is achieved by probing four points on the wooden strut, which are required to define its exact position and rotation. Subsequently, the tracked location is sent to the digital model to update it with the as-built data.
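The paper does not detail the fitting itself; the sketch below assumes the four points are probed along the top surface of the strut, so its centerline is the best-fit line through the points, offset by the 20 mm strut radius. All coordinates are illustrative.

```python
import numpy as np

RADIUS = 0.020  # strut radius in meters (Sect. 3.3)

# Four points probed with the robot's measurement tip (illustrative values).
probed = np.array([
    [0.05, 0.00, 1.021],
    [0.35, 0.01, 1.019],
    [0.65, 0.02, 1.020],
    [0.95, 0.03, 1.021],
])

centroid = probed.mean(axis=0)
# The first principal direction of the probed points is the strut axis.
_, _, Vt = np.linalg.svd(probed - centroid)
axis = Vt[0] / np.linalg.norm(Vt[0])

# Offset from the probed top surface down to the strut centerline.
center = centroid - np.array([0.0, 0.0, RADIUS])

# center and axis define the as-built strut frame sent to the digital model.
print('strut center:', center, 'axis:', axis)
```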

After registration and syncing of the digital model with manually added elements, the user can pick the next Y-triplet to continue building and initiate a consecutive robotic action. The assembly cycles are repeated until the structure is finished (Fig. 9).

Fig. 9 Time-lapse recordings of assembly and disassembly of the experimental prototype

3.5 System architecture

The system architecture consists of a hardware setup (Fig. 10) and a communication system (Fig. 11) that allows for interoperability between all devices used in the experiment.

Fig. 10 Hardware setup of the system: (A) Linux computer, (B) CAD computer, (C) mobile AR device, (D) QR codes, (E) adjustable 3D-printed feet, (F) zip ties, (G) wooden struts, (H) wool, (I) robot 1, (J) robot 2, (K) mobile platform, (L) timber strut pick-up station, (M) pneumatic parallel gripper with custom 3D-printed gripper fingers, and (N) measured-in fix points (QR codes)

Fig. 11 Communication workflow diagram showing the system setup, consisting of (1) a computer with a CAD design environment, (2) a mobile AR-app, and (3) a robotic unit

Hardware setup: The hardware setup consists of two 6-DoF collaborative robotic arms (UR10e) with custom 3D-printed pneumatic grippers. To extend the working space of the UR10e, the robots are placed on mobile platforms, allowing humans to reposition them manually. Each robot has a timber strut pick-up station from which it collects wooden elements. As the mobile AR device, we use a Google Pixel 4. The hardware used for communication between all devices includes a Linux PC and a Windows PC, the latter running the interactive design model.

Communication workflow: A necessary component for an augmented human–robot cooperative process is a scalable communication system connecting multiple devices and back-end computational processes, which in this case is achieved using a ROS publish-and-subscribe architecture and the rosbridge package (Crick et al. 2017). Here, the ROS system architecture connects all devices (Fig. 11): the AR-app, the interactive algorithmic model, and the MoveIt simulation. The AR-app is a custom version of the incon.ai software, providing visual-inertial object tracking as well as ROS functionalities such as the publish-and-subscribe architecture and data structure. The Linux PC runs the ROS master, the rosbridge server, and the MoveIt simulation. The second PC runs the interactive design model, which uses Python, COMPAS, and Grasshopper and visualizes the data structures within Rhinoceros. As depicted in Fig. 11, the computational units and the AR devices are connected via Wi-Fi to the same ROS master and rosbridge server. A direct TCP/IP communication is established between the CAD environment and the robots.

At the beginning of a work session, the user uploads the initial assembly model as JSON into the CAD environment and publishes it via a ROS service. The mobile AR-app subscribes to this service via the rosbridge server. Once the CAD design environment and the mobile AR-app are in sync, the uploaded assembly model is visualized in the AR-app. The user can initialize the object tracking when the assembly model aligns with the physical model. After a new element has been tracked and registered, its position and orientation are published via a ROS topic. The CAD design environment subscribes to this ROS topic and updates the digital model of the assembly with the received as-built data. In a consecutive step, the next assembly cycle can be initiated, that is, Y-triplets can be calculated and published again via a ROS service to sync the AR phone.
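A minimal sketch of this exchange, assuming roslibpy (a Python client for rosbridge commonly used alongside COMPAS) on the CAD side; the topic name, message type, and host address are illustrative, not the project's.

```python
import time
import roslibpy

ros = roslibpy.Ros(host='192.168.0.10', port=9090)   # rosbridge server
ros.run()

# Topic on which the AR-app publishes the pose of a registered strut.
tracked = roslibpy.Topic(ros, '/assembly/tracked_strut',
                         'geometry_msgs/PoseStamped')

def update_assembly(message):
    # The CAD side patches the assembly graph with the as-built pose.
    print('as-built position:', message['pose']['position'])

tracked.subscribe(update_assembly)

# Stand-in for the AR-app side: publish one registered pose.
tracked.publish(roslibpy.Message({
    'header': {'frame_id': 'world'},
    'pose': {'position': {'x': 1.2, 'y': 0.4, 'z': 1.1},
             'orientation': {'x': 0, 'y': 0, 'z': 0, 'w': 1}},
}))

time.sleep(1)        # give the callback time to fire
ros.terminate()
```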

4 Results and limitations

We tested and validated our computational setup and assembly strategy by producing a proof-of-concept prototypical architectural structure over a period of 5 days. The floor area of the prototype was 6 × 4 m. As described in Sect. 3.4, two humans in collaboration with two mobile robots cooperatively and interactively assembled a wooden structure in turn-taking actions. The hybrid human–robot assembly process was initialized with three pre-assembled elements fixed to the ground. A total of 38 wooden struts were assembled, of which 29 were placed manually and tracked by a user and nine were placed by the robots (Fig. 12). The design setup focused on the space frame logic due to the inherent rigidity of the triangle. Rope joints connected the different elements of the triplets. Over the period of 5 days, we placed 53 knotted joints, and the whole structure was disassembled within two hours (Fig. 9).

Fig. 12 Screenshot of the digital model showing the robotically placed elements in blue and the manually placed elements in yellow

In this experimental study, more struts were placed manually than robotically because many connections to the walls and floor were required as "special scenario struts" to ensure structural stability. Ten of the 29 manually placed struts were registered with both the app and the robot. Not all manual struts were registered because only those used to continue the assembly robotically were tracked and included in the physical-digital model (Fig. 13). Another eight struts were re-registered during assembly because the structure deformed over time and updated strut positions were required to continue a precise building process. The tracking discrepancy between struts registered by the AR-app and by the robot was measured by comparing the registered element frames located in the center of the cylinder; the discrepancy ranged from 19.74 to 78.9 mm in position and from 2.81 to 7.83 degrees in rotation. The color gradient in the error plot (Fig. 14) indicates these deviations. The registration via visual-inertial object tracking reached its technological limits due to the distinct geometry of the struts. The long, thin wooden struts were not ideal for edge-detection-based algorithms, as they were only marginally constrained in one direction, which led to a shift of the digital model along the strut's axis. The alternative registration method of probing reference points with the robot's manipulator fulfilled the accuracy requirements. However, both methods proved time-consuming to use. The sketch below shows how such a frame-to-frame discrepancy can be computed.
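A minimal sketch, assuming each registration yields a frame as an origin plus a 3 × 3 rotation matrix: the positional error is the distance between origins, and the rotational error is the angle of the relative rotation. The example values are illustrative.

```python
import numpy as np

def frame_deviation(origin_a, R_a, origin_b, R_b):
    """Positional (m) and rotational (deg) difference between two frames."""
    d_pos = np.linalg.norm(np.asarray(origin_a) - np.asarray(origin_b))
    R_rel = np.asarray(R_a).T @ np.asarray(R_b)          # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return d_pos, np.degrees(np.arccos(cos_angle))

# Illustrative example: frames offset by 30 mm and rotated 5 degrees about z.
theta = np.radians(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
d, a = frame_deviation([0, 0, 0], np.eye(3), [0.03, 0, 0], Rz)
print(d, a)   # 0.03 m, 5.0 degrees
```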

Fig. 13 Different tracking results of manually placed elements: orange, tracked with the AR-capable phone; red, tracked with the robot

Fig. 14 Color gradients visualizing the deviation between the tracking results of the AR-app and the measurements executed with the robot: a translation (red), b rotation (blue)

5 Discussion

5.1 Human–robot cooperation

This experimental study has explored how the combination of manual and robotic actions might open new opportunities for future crafts and lead to new workflows for human agency in robotic construction. Examples of such human agency are tasks that are difficult for robots to carry out, such as, in this experiment, joining the wooden elements using ropes. Most joining processes in robot manufacturing have focused on systems that can be automated and avoid such complex connections. In contrast, Tie a knot is characterized by the intentional incorporation of manual joining techniques into robotic processes, aiming to combine advanced robot-based methods with traditional craftsmanship knowledge. Human agency is further reinforced by the system's flexibility, which makes spontaneous design decisions and adjustments possible. This openness to spontaneous changes is particularly important in special situations, for example, when elements have to be attached to the floor or a wall. However, a prerequisite for such flexibility in a robotic construction process is the ability to continually feed human-induced changes into a digital model, here presented as the shared digital-physical workspace.

According to Shi's categorization (Shi et al. 2012), our system falls into the category of human–machine cooperation because it supports an intermediate level of human–machine collaboration. Both cooperating entities, humans and robots, have the autonomy to achieve a common goal and to make use of the knowledge and skills of the other. Both entities share the same workspace, and the human interacts directly with the robots in it. The human assembles and joins the elements synchronously, while the robot holds single elements and thereby stabilizes the overall structure. Currently, the robot cannot detect the human's position relative to its own. Therefore, the robot did not continue with the next pick-and-place task until the human initiated it and moved outside the robot's workspace. Tighter and more responsive sensor systems would be needed to enable parallel task execution with humans within the workspace.

5.2 Potential for future work

Tie a knot incorporated complexity on multiple levels, involving mobile robotic systems, structures that deform over time, and non-predefined design. These systems require continuous localization of the robots, tracking of built elements over extended periods of time, and registration of manual physical interventions. So far, only a relatively small architectural installation has been fabricated with the workflow and tools developed here. Future research aims to assemble a larger-scale structure to test a broader range of spatial articulations. As scale increases, more robust computational support needs to be implemented to guide human decision-making, i.e., intelligent computational processes capable of observing, predicting, and controlling quantifiable performance targets such as structural stability and robot range. Such real-time structural analysis would be required to ensure that spontaneously made decisions are statically valid, also considering future load cases.

Furthermore, the automatic object tracking workflow and its implementation need substantial improvement. Future development should combine the object tracking of manually placed elements with the tracking system used for locating the mobile robots in relation to the built structure. Additionally, it is critical to speed up the tracking of manual changes made to the built structure.

Regarding the AR interface, future work will focus on the visualization of additional data, such as design possibilities, robot reachability, and robot toolpath simulations, overlaid on the physical world. AR could also be used to inform people about the structural feasibility of currently selected options. Such spatial visualizations could better support collaborating people in making more informed design decisions.

6 Conclusion

Instead of supporting a workflow that is object- and end-product-oriented, Tie a knot furthers the idea that traditional craft fulfills a deep-seated human need for direct engagement with material production (McCullough and McLaren 1998). The workflow developed here allows for intuitive interaction and direct tacit engagement with the material and process, thus deviating significantly from linear design-to-production workflows. At the same time, back-end computational processing combined with highly precise tracking algorithms provides new possibilities for human augmentation.

It is common to include human collaborators in semi-autonomous processes in which the human undertakes specific tasks, such as manually loading the robot's material, placing and tightening joints, or manually drilling robotically placed elements; in these processes, however, human interaction is not linked to a digital model and happens outside of supervision. In our workflow, human interventions are used strategically for decision-making, corrections, and tacit engagement with a physical process, while still being assisted by computational logic that assures quality.

Tie a knot is a system that allows humans to negotiate the levels of task distribution and coordination and thereby reinvent the fundamental relationship between humans (skilled workers and designers) and machines and robots. Such a system reinforces human agency and increases the social sustainability of automation by allowing humans to make decisions throughout fabrication procedures and interactively decide on task distribution. This workflow enables explicit machine intelligence (parameters, work range, structural boundary conditions) to be integrated with implicit human knowledge (creativity, intuition, fast reaction to complex situations), thus enabling a new cooperative workflow and building strategy. These cooperative strategies could be harnessed to extend robotically automated workflows to materials and construction scenarios that have resisted automation, including unpredictable and unstructured material processes or working within complex existing building structures. In such cases, humans could actively intervene, physically or cognitively, supporting or steering automated processes toward higher levels of robustness and efficiency in complex or unforeseen scenarios.