People are divided on what the future will look like with robots in the public domain. While some see robots as humanoid creatures that steal jobs from us, many find them fascinating and dream of a future alongside them.
We at KUKA have a very clear vision: robots should assist people, not the other way around. Robots can help people with their work, relieving them of unpleasant or non-ergonomic tasks. The tasks carried out by people should preferably be limited to those requiring human skills such as the ability to learn from experience, sensory perception, creativity and improvisation. Robots can complement these human virtues with power, repeatability, speed and quality. Smartphones and PCs are part of everyday life for all of us; accountants no longer add up columns of figures manually – they use machines.
For us at KUKA, human-robot collaboration (HRC) also means responsibility. People are always the central focus – robots are there to help people and must never endanger them. To do so, robots must comply with new safety requirements. There are of course various ways to deal with these requirements. At KUKA, we draw a distinction between safety functions in place to protect people when they are working with a robot, and functions that make the robot itself safe.
How safe is the area around the robot? What tools does it use? These are questions we also have to address. We do not believe it is right to leave the safety of a robot system up to the integrators or users – our customers. We are not satisfied until the robot is safe in its particular application. We think about how close the human comes to the robot because, broadly speaking, almost all robots are capable of HRC in one way or another. There are four variants:
- The robot stops when the safety gate to the robot is opened. All that is needed for this variant is a safe input (a safety relay if necessary) that stops the robot when somebody enters the area.
- If the robot is to be operated manually, a three-position enabling switch and safe velocity monitoring are required in addition to the safe input. (In some cases, safe orientation monitoring may also be required.)
- If a safe sensor can determine the location of the person (with the same degree of safety), the robot's velocity can be reduced according to the distance between the person and the robot, ensuring that the robot is always at a standstill by the time the person reaches it. For this variant, the robot needs the “safe velocity monitoring” safety function and a safe input for the Emergency Stop button. Collision avoidance of this kind is the basis for all current HRC robot systems.
- The ultimate challenge and the true essence of human-robot collaboration is collision control (“power and force limitation”). This involves the robot being able to interact with the person – the person can touch the robot, guide it, and even collide with it. Using functions such as safe collision detection or safe force monitoring, it must then be ensured that forces and pressures do not exceed defined, safe limits in the event of a collision, regardless of the operating situation. Put simply: the robot must not hurt the human.
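The last two variants can be sketched in a few lines of code. The following is a purely illustrative sketch, not KUKA's implementation: the distance thresholds, the force limit and the linear speed law are all assumptions chosen for the example.

```python
# Illustrative sketch of two HRC safety strategies. All limits and the
# linear speed law are assumed values, not KUKA parameters.

STOP_DISTANCE_M = 0.5        # assumed: robot must stand still within this distance
FULL_SPEED_DISTANCE_M = 2.0  # assumed: beyond this distance, full speed is allowed
MAX_SPEED = 1.0              # normalized full speed
FORCE_LIMIT_N = 150.0        # assumed safe contact-force limit

def allowed_speed(person_distance_m: float) -> float:
    """Speed-and-separation monitoring: scale the velocity with the
    distance to the person so that the robot is already at a standstill
    by the time the person reaches it."""
    if person_distance_m <= STOP_DISTANCE_M:
        return 0.0
    if person_distance_m >= FULL_SPEED_DISTANCE_M:
        return MAX_SPEED
    span = FULL_SPEED_DISTANCE_M - STOP_DISTANCE_M
    return MAX_SPEED * (person_distance_m - STOP_DISTANCE_M) / span

def force_within_limit(measured_force_n: float) -> bool:
    """Power-and-force limitation: contact is acceptable only while the
    measured collision force stays below the defined safe limit."""
    return measured_force_n < FORCE_LIMIT_N

print(allowed_speed(0.3))         # 0.0 -> person is close, robot stands still
print(allowed_speed(2.5))         # 1.0 -> full speed
print(force_within_limit(120.0))  # True -> contact within the safe limit
```

In a real safety controller, both checks would of course run on certified hardware with redundant channels; the sketch only shows the decision logic.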
Once the robot’s application has been clarified, the question now is whether the robot provides the required safety functions. KUKA implements the following functions, for example:
- Safe collision detection
- Safe force monitoring
- Safe workspaces and protected spaces
- Safe position monitoring
- Safe velocity monitoring
- Safe inputs and outputs
- Safe tool detection
- Safe orientation monitoring
- Safe state switching to allow for changing back and forth between safety strategies within a particular application
KUKA has also had these safety functions certified in accordance with DIN EN ISO 13849 (Performance Level d, Category 3) and DIN EN 62061 (SIL 2). The safety functions are built up from a range of components, such as sensors, evaluation electronics, means of communication and control. Each component contributes to the overall safety function, and each in turn should be certified.
The next step is configuring the robot safely within its workspace. Are the safety functions, such as collision detection or force monitoring, effective throughout the robot’s workspace? What if the measurement accuracy is limited or non-existent at certain points within the workspace or on certain parts of the robot structure? Is safety then still ensured?
Next comes the accuracy of the force measurement in safe technology. If, for example, a force limit of 120 N is set, a measurement uncertainty of 130 N is not sufficient – the limit could never be guaranteed. Even with a measurement uncertainty of 110 N, the robot may only exert a force of 10 N to stay reliably below the limit, which would severely restrict the application scope.
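The arithmetic behind this can be made explicit. In this hedged sketch (the function name and worst-case subtraction model are illustrative assumptions), the usable commanded force is the safe limit minus the measurement uncertainty:

```python
def usable_force(limit_n: float, uncertainty_n: float) -> float:
    """Worst-case usable force: to guarantee the safe limit, the trigger
    threshold must be lowered by the measurement uncertainty.
    Illustrative model, not a KUKA formula."""
    return max(limit_n - uncertainty_n, 0.0)

print(usable_force(120.0, 10.0))   # 110.0 N -- most of the range remains usable
print(usable_force(120.0, 110.0))  # 10.0 N  -- barely usable
print(usable_force(120.0, 130.0))  # 0.0 N   -- the limit cannot be guaranteed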
The resilience of a robot should also be tested. Sensitive, safe robots are equipped with measuring technology but must nevertheless be robust. It is thus important to know how much a robot can endure and how this was tested – with no load, a partial load or a full load.
Last but not least, the robot must undergo the crash test at KUKA. This is the only way to tell whether our robots are still safe even after being overloaded or having experienced a crash. As mentioned above, people are the focus in human-robot collaboration. All these questions must therefore be resolved at the latest during the risk analysis of a particular application. This is much easier if the robot incorporates certified safety functions. The safety of the application can then also be readily confirmed by the CE mark required under the Machinery Directive. If a CE mark is improperly awarded, there is a great risk of liability – including personal liability – in the event of damage.
The answer to the question addressed at the beginning is that robots are machines and people are responsible for their safe operation. There is no need for anyone to fear human-robot collaboration if reliable and robust safety measures are implemented.