Being daring with technology, both professionally and at home, is moving my practical use of it forward at a high rate. From a theoretical point of view, I am intrigued by the rate at which the AEC industry could progress if technology were a primary focus, as it is for many of us at home. I try to keep up with emerging technologies, and I can't help but always end up thinking, "How can I make BIM better with this?" As a result I have looked at advances in technologies such as Augmented Reality (AR) and gesture control and how, in theory, they could help in the BIM process. Naturally the mind drifts in these situations, and BIM Level 3 processes and collaboration come to mind. Could technology help in the creation of their definitions?

AR is a fantastically fun and useful technology. But the fun only lasts a few minutes or so: either you find the restaurant you are looking for on the AR app (or, just as often, by traditional means), or you are bored after the third rabbit jumps past the tablet screen. There are some cool marketing applications for AR, and 3D modelling is made for these services, but again it's a luxury that no one seems to have time to make use of.

What if AR were part of the day-to-day work rather than a specialist application, like the computer monitor is today? And what if it improved how you developed the 3D model rather than just presenting its current stage? You can already interrogate buildings for serviceable components using a tablet, but what if you could step into the space and control it physically? This is where holographic technology and gesture control come into the equation.

Combining holographic AR with gesture control could be the future not only of how buildings are managed but also of how they are designed and monitored during the construction phase. Just using Kinect and comparing the progress in functionality between the Xbox 360 and the Xbox One is amazing, and the parallel to design development and collaboration is easily drawn: room-depth mapping, limb-angle and speed calculations, and facial recognition of up to six people controlling pre-set parameters, to name a few. OK, this is only for controlling your home entertainment system, but what if the AEC industry demanded the technology be developed for its purposes?

Imagine a BIM project where teams from multiple locations share a holographic AR space and develop model components at 1:1, with team-specific security permissions based on the people attending the collaboration meeting. Erm, amazing!

OK, from a realistic and practical perspective, i.e. as a professional: project needs, change management principles and the time frame for cultural evolution, as well as the technology actually working, all play a major role in adapting to new ways of collaborating and developing. But looking at early-stage possibilities from a technical point of view, applied to the current BIM Level 2 process, allows us to look forward to new possibilities and perhaps satisfy our tech-geek itch a little bit too.

With this backdrop of eager wants and curious forward thinking, let’s look at what Microsoft’s HoloLens and Google’s Project Soli could do for BIM projects:

The Microsoft HoloLens is a device that carries three processors: a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) and an HPU (Holographic Processing Unit). This makes it possible to map the holographic environment we are stepping into, with spatial sound and gesture control, without any need for external cameras, wires, phones or a computer connection.

The use of standard (CAD) commands such as copy, rotate, mirror and join (the actual current command is "Glue"), as well as moving, placing and launching tools, is voice, motion and gaze controlled.
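To make the idea concrete, here is a minimal sketch of how voice phrases could be dispatched to CAD-style model operations in such an environment. All names and structures here are hypothetical illustrations, not a real HoloLens API:

```python
# Hypothetical sketch: mapping recognised voice phrases to CAD-style
# operations on a model component. Not a real HoloLens or CAD API.

def copy(component):
    """Return a duplicate of the selected component."""
    return dict(component)

def rotate(component, degrees):
    """Rotate the component in place by the given number of degrees."""
    component["rotation"] = (component.get("rotation", 0) + degrees) % 360
    return component

# Map recognised voice phrases to model operations.
COMMANDS = {
    "copy": copy,
    "rotate ninety": lambda c: rotate(c, 90),
}

def on_voice_command(phrase, selected_component):
    """Dispatch a recognised phrase to the matching operation."""
    handler = COMMANDS.get(phrase.lower())
    if handler is None:
        raise ValueError(f"Unrecognised command: {phrase}")
    return handler(selected_component)

wall = {"name": "Wall-01", "rotation": 0}
rotated = on_voice_command("Rotate ninety", wall)  # → rotation becomes 90
```

In a real system the phrase recognition would come from the headset's speech engine and the operations would act on the live federated model; the dispatch pattern, however, would look much like this.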

When the user launches an application from the AR tool provided by the HoloLens environment, the User Interface surrounds the user and the space becomes the design area.

Naturally, and without going into details on unified communications, it is easy to see how communication and collaboration on project development could benefit from adopting this technology. Hint: see the roles and responsibilities contact list in the BEP.

Google's Project Soli is a motion-tracking technology that uses radar to map gestures, or "human intent", at an extremely high frequency (10,000 frames per second). The hardware is a chip small enough to fit into, for example, handheld devices. It works through other materials, and there are no moving parts and no lenses.

Inserting this technology into devices or environments where development and design take place could let the user control tool settings, make selections from drop-down lists, and handle tools and functions with an accuracy not possible with gesture control before Project Soli.
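The value of that 10,000 frames-per-second figure is that even a very simple classifier gets thousands of samples per gesture to work with. As an illustration only (Soli's real signal processing is far more sophisticated, and the frame format here is invented), a crude "pinch" detector over a stream of radar energy readings might look like this:

```python
# Illustrative sketch: detecting a "pinch" gesture from a stream of
# radar intensity frames. The frame format and threshold are hypothetical;
# Project Soli's actual processing is far more sophisticated.
from collections import deque

FRAME_RATE = 10_000          # frames per second, as quoted for Soli
WINDOW = FRAME_RATE // 100   # examine a sliding 10 ms window of frames

def detect_pinch(frames, threshold=0.8):
    """Return True if the average signal energy over any 10 ms window
    exceeds the threshold, taken (hypothetically) to mean finger contact."""
    window = deque(maxlen=WINDOW)
    for energy in frames:
        window.append(energy)
        if len(window) == WINDOW and sum(window) / WINDOW > threshold:
            return True
    return False

# A quiet stream followed by a burst of high-energy frames (a "pinch").
stream = [0.1] * 200 + [0.95] * 200
detect_pinch(stream)  # → True
```

At 10 ms per decision window, a gesture registers faster than a screen refresh, which is what would make fine-grained control of drop-downs and tool settings feel as precise as a mouse.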

Applied Technology

Combining these two technologies with the BIM process and its associated technologies could be significant as the industry gears up for a practical Level 3 BIM process. What we seem to be striving for as a BIM collective is to tap into a model from anywhere at any time, with instant access to voice and video communication and the entire project data resource. We want a model that is always up to date, and if we make any changes we want the rest of the team to be notified and to see them live. In that environment we want access to the standards and rules that control the design development, as well as the support mechanism and team that keep the project and the team technically healthy. That is not too much to ask when you consider that, with the HoloLens, the team assigned to a specific task could access a hosted holographic AR space with all the applications, communication and the CDE within arm's and gaze reach. Each available parameter could be voice and motion controlled, e.g. saying "Export IFC" while handling the UI and any scroll and selection requirements between your thumb and index finger. As for support, there are already pretty good remote access services available, and there is nothing to say that application share or desktop share couldn't become holographic share.

How this could actually work is not clear yet, but could Windows Azure technology be what ties the HoloLens collaborators together? There are also interesting connections between Azure Active Directory and Windows 10 (Windows 10 being the platform for the HoloLens); single sign-on, anyone?

However, these are questions for the (near) future. What this could do for the 2016 Level 2 BIM Government initiative may, in theory, look something like this:

The EIR, BEP and other Standards

The development and sharing of the EIR (Employer's Information Requirements), BEP (BIM Execution Plan) and other standards documents with the design team will probably not require any AR. But the way we access and use multiple documents, develop component libraries and access cloud-based project environments, including the admin data, standards and the Common Data Environment, would be improved as the user could place documents, models, etc. anywhere in the field of vision, i.e. outside the limitations of the screen size. This must surely be a good thing?

Developing the Federated Model and its Components

Developing client- and project-stage-specific components, e.g. a window or a chair, could be done at 1:1 in a room you could move about in, or at a reduced scale for larger objects, e.g. a steel frame, sitting right in front of you on your desk. With the advanced gesture control enabled by the Project Soli chip Google is developing, defining details and handling tools and functions in the BIM applications would be as accurate as using a mouse. However, there is a learning curve: just adapting to controlling my TV via voice command on the Xbox One and Kinect is taking some adjustment, and I would predict that using "Draw on Solid" in AECOsim or modelling a slanted, curved roof in Revit would require some adapted dexterity.

Once the content library is created, the user could access it from wherever he or she chooses to place it in the space making up the UI: scan through the content, take an item out, scale it and look at the details to make an informed choice before loading it into the project.

We could physically walk around, or view part of, an assembly model to set up the drawings and make comments while hosting or taking part in a collaboration meeting with voice and video, or validate the information for graphical suitability while pulling out and checking the metadata list for each component using gesture control. This must be Facilities Management's dream, and a step further than using the AR function on a tablet.

Deliverables are obviously the whole point of the project, and batch processing is just another tool controlled as normal, but 3D printing becomes more integrated into the environment by allowing the designer to easily transform the holographic design right in front of them into a physical object.

Even if this is emerging technology and realistically not applicable today, it is interesting to look at and, above all, to be prepared for what the future may hold.

If you are looking to Evolve with BIM, please get in touch.






Photo by davide ragusa on Unsplash