The concept of digital is all around us these days. Everywhere you look there are blogs and news stories about Industry 4.0 and Digital Transformation projects, all of which focus on the interoperability of machines.
New technologies such as blockchain (invented for the cryptocurrency Bitcoin), artificial intelligence and the Internet of Things all rely on this interoperability. This matters, because the more connected our data becomes, the more information-rich our experience is. But where does the human fit into all of this? Are we at risk of being overtaken by these increasingly “smart” machines?
A machine can consume vast quantities of data in a very short time. With computer processors, memory and storage now vastly faster and larger than just a few years ago, the amount of computing power available is immense, and the algorithms that exploit it are equally capable. Even the ads we see are targeted. How often do you have a conversation about something, only for an ad to appear in your LinkedIn or Facebook news feed offering a product or solution for exactly what you had discussed?
Machines are getting very smart, but are they smarter than us? I don’t think so; they are a different type of smart. They are still doing what they are told, by a human.
But does this mean that the human is now at a disadvantage? Unlike a computer, you cannot simply fit some more RAM or an SSD (solid-state drive) into the average person! So how can we make it easier for people to consume complex information?
Firstly, we need to understand that raw data is not particularly useful. For example, hearing some geometry described aloud would not allow the average person to understand it. When you consider that an estimated 80–90% of the information we take in arrives through sight, you start to appreciate how differently humans consume information. A computer could process a set of described vectors, but without context that data adds no real value for us.
Secondly, we require information: data with context. For example, some geometry without measurements is just data. Adding measurements helps, but for it to be fully understood, seeing it at full scale allows any ambiguities to be resolved quickly. Complexities in interfaces, particularly with other assemblies and objects, are far more easily understood when the information is presented at full scale and in context. It is unnatural to ask our brains to process 3D information with that context missing.
Shrinking information onto a two-dimensional display (a computer monitor) creates an unnecessary load on our brains. So, in order not to reduce the productivity of your digitally enhanced manufacturing process, and to keep up with the machines, you need to “digitally enhance your workforce”. But how can we do this?
We have recently published a white paper that explores this in more detail. If you would like a copy, then please click here.
Theorem XR allows you to bridge this cognitive gap, fully exploit your existing 3D CAD and digitally enhance your workforce.