Outcomes of Ufi Project 1 – System Development

This series of posts is intended to showcase the top-level outcomes of Ufi Project 1, titled “Augmented Learning for High Dexterity Manufacture”. This project was funded by Ufi, a vocational learning charity. In this post we’ll be taking a look at how the whole system developed from its previous iterations. As mentioned in an earlier post, there were two prior phases of what would eventually turn into the LayupRITE PIAR system.

From Left to Right: KAIL, pre-LayupRITE hardware, LayupRITE PIAR

The first stage was an early proof-of-concept of projecting interactive instructions onto the tool. The second stage was taking that concept and revising the individual elements, improving the projector, and using a newer version of Microsoft Kinect. The Ufi project allowed us to take these components and investigate ways of displaying/mounting them to produce what would become the LayupRITE PIAR system.

Physical Setup

In the left-hand image above (KAIL), the mounting solution was fairly ad hoc, due to the short-term nature of the research project. The main downside was having to mount the standard projector far enough away from the tool for the image to cover it. This necessitated the large fixturing stand shown above, which required sandbags to stop it toppling over; not an ideal setup for the longer term.

The centre image is from a follow-on project intended to improve and “modernise” the KAIL system. The first difference was the updated version of the Kinect. The newer Kinect had a wider field of view and higher-resolution depth and RGB cameras, and was still supported by Microsoft at the time. The other difference was that a higher-power, ultra-short-throw projector was used in place of the standard long-throw version. This projector was bright enough to show visible images on carbon fibre in normal clean room lighting conditions.

KAIL (L) was only visible on glass fibre materials with the lights off. LayupRITE (R) was still visible even on carbon materials under normal clean room lighting

What was noted at this stage was that, due to the short throw of the projector, steeper surfaces on parts would be in shadow. This meant that the projector had to be mounted further away from the tool, requiring new fixturing. The new mounting solution gave us the opportunity to mount other equipment, such as the PC and monitor, to the pole along with the Kinect. This solution reduced the overall footprint and the trailing cables, and gave us the form factor for the LayupRITE PIAR system.

Software Setup

Most of the changes from KAIL to LayupRITE PIAR were in software. The previous iterations used the Windows Presentation Foundation (WPF) framework with C# as the scripting language. This limited the program to 2D, as WPF is intended for building desktop apps on flat screens. The outlines of the instruction target sections were transformed manually, by eye, to make the 2D lines conform to the tool. This meant that the software, as written at the start of the project, would not work for a general case and needed changing.

Instruction targets in 2D plan view (L) transformed to match contours of tool manually (R)

What was required was a 3D environment that could better handle the collision detection and was compatible with the Kinect. For this we turned to the Unity game engine. Colleagues had some experience of using Unity with the Kinect and VR in a project related to LayupRITE, so we felt we had enough of a basis to begin using it.

Moving to Unity

An enabling feature of the Unity platform is the “prefab”. Prefabs are building blocks of objects, scripts and other components which can be dropped into a “scene”, or program. These can then be updated in every scene or used as instances. What this means for this program is that we can drop in controls, virtual net objects and so on. This modularity also lets us swap out, for instance, the game “camera”: for PIAR this becomes a projector-camera prefab, while for another application it could be a HoloLens or a VR headset. The ability to be modular was a major selling point of Unity for this project.
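As a rough illustration of that swap, a small bootstrap script could decide at start-up which display prefab to spawn. This is only a sketch; the prefab names, the DisplayMode enum and the script itself are ours for illustration, not the LayupRITE codebase.

```csharp
using UnityEngine;

// Illustrative only: chooses which "display" prefab to spawn when the scene
// starts. Prefab fields are assigned in the Inspector.
public class DisplayBootstrap : MonoBehaviour
{
    public enum DisplayMode { ProjectorCamera, HoloLens, VRHeadset }

    [SerializeField] private DisplayMode mode = DisplayMode.ProjectorCamera;
    [SerializeField] private GameObject projectorCameraPrefab;
    [SerializeField] private GameObject hololensRigPrefab;
    [SerializeField] private GameObject vrRigPrefab;

    private void Awake()
    {
        GameObject prefab;
        switch (mode)
        {
            case DisplayMode.ProjectorCamera: prefab = projectorCameraPrefab; break;
            case DisplayMode.HoloLens:        prefab = hololensRigPrefab;     break;
            default:                          prefab = vrRigPrefab;           break;
        }

        // Everything else in the scene (virtual nets, instructions, controls)
        // stays the same regardless of which display rig is instantiated.
        Instantiate(prefab, transform.position, transform.rotation);
    }
}
```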

The virtual nets have warp fibres (purple) woven with weft fibres (orange) with the crossing points (nodes) represented by white circles

What Unity also allowed us to do was to make the hands tracked by the Kinect collide with the in-game representations of the composite net. The representations took the form of spheres (called “nodes” in the model) which represent the crossing points of fibres in a woven fabric. By tracking the interaction with these nodes, we can test and identify which areas of the tool the user has interacted with. This means that, by projecting information on where and when to interact, we can guide the laminator into working in an optimal, or at least repeatable, fashion.
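The node side of that interaction can be sketched as a small script on each sphere. This is our own illustrative version (the “Hand” tag and the colour change are assumptions), not the project’s source.

```csharp
using UnityEngine;

// Illustrative sketch of a net "node": a trigger sphere that records when a
// tracked hand first touches it, so worked areas can be identified.
[RequireComponent(typeof(SphereCollider))]
public class NetNode : MonoBehaviour
{
    public bool Touched { get; private set; }
    public float FirstTouchTime { get; private set; }

    private void Awake()
    {
        GetComponent<SphereCollider>().isTrigger = true;
    }

    private void OnTriggerEnter(Collider other)
    {
        // Assumes the Kinect-driven hand objects carry colliders tagged "Hand"
        // (and a Rigidbody on one side, so trigger events are raised).
        if (Touched || !other.CompareTag("Hand")) return;

        Touched = true;
        FirstTouchTime = Time.time;

        // Tint the node so the projected view shows this crossing point as worked.
        GetComponent<Renderer>().material.color = Color.green;
    }
}
```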

The process for moving from the modelling environment to the projector environment followed a similar process to that of KAIL, but more streamlined:

  1. Simulate the drape of the ply
  2. Identify areas to work in and sequence (this is done by an experienced laminator)
  3. Select the nodes which represent those areas
  4. Project onto the part

Due to the 3D environment and the calibrated camera-projector system, no “nudging” of individual areas is required. All the above steps can be done in software, although there is still scope for streamlining and automating them.
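In software terms, steps 2 and 3 boil down to an ordered list of work areas, each holding the nodes an experienced laminator selected for that stage. The sketch below shows one plausible way to represent this; the class and field names are ours, and it reuses the illustrative NetNode from earlier.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Illustrative layup step: the nodes selected for one area of the sequence,
// plus helpers for highlighting them and checking progress.
[System.Serializable]
public class LayupStep
{
    public string description;          // e.g. "Consolidate the left-hand recess"
    public List<NetNode> targetNodes;   // nodes picked out by the experienced laminator

    // A step is complete once every selected node has been touched.
    public bool IsComplete => targetNodes.All(n => n.Touched);

    // Colour the target nodes so the projector highlights where to work next.
    public void Highlight(Color colour)
    {
        foreach (var node in targetNodes)
            node.GetComponent<Renderer>().material.color = colour;
    }
}
```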

Calibration and Tool Tracking

Calibration of this type (camera-projector stereo calibration) is a large topic by itself, so here I’ll just mention that we were using the RoomAlive Toolkit for Unity. This is where the equivalent of KAIL’s “nudging” of the projected output came into play. Whilst the calibration was able to somewhat determine the intrinsic properties of the Kinect camera and the projector, its approximation of their relative positions and angles often required manual tweaking. This is most likely due to the relative angles of the Kinect and projector. A secondary factor could also have been the ultra-short throw of the projector. Further work would be required to improve the overall quality of the calibration and make the process more streamlined.
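In practice that manual tweaking amounts to applying small corrections to the estimated relative pose. Below is a sketch of the kind of keyboard “nudge” helper one might hang off the projector object; the key bindings and step sizes are our own choices, and it assumes Unity’s legacy Input Manager.

```csharp
using UnityEngine;

// Illustrative "nudge" helper: small manual corrections to the relative pose
// produced by the stereo calibration. Attach to the projector (or camera)
// object inside the calibrated rig.
public class ExtrinsicsNudge : MonoBehaviour
{
    [SerializeField] private float positionStep = 0.002f; // metres per key press
    [SerializeField] private float rotationStep = 0.1f;   // degrees per key press

    private void Update()
    {
        // Translate with the arrow keys.
        if (Input.GetKeyDown(KeyCode.LeftArrow))  transform.localPosition += Vector3.left  * positionStep;
        if (Input.GetKeyDown(KeyCode.RightArrow)) transform.localPosition += Vector3.right * positionStep;
        if (Input.GetKeyDown(KeyCode.UpArrow))    transform.localPosition += Vector3.up    * positionStep;
        if (Input.GetKeyDown(KeyCode.DownArrow))  transform.localPosition += Vector3.down  * positionStep;

        // Pitch with W/S and yaw with A/D.
        if (Input.GetKeyDown(KeyCode.W)) transform.Rotate(Vector3.right, -rotationStep, Space.Self);
        if (Input.GetKeyDown(KeyCode.S)) transform.Rotate(Vector3.right,  rotationStep, Space.Self);
        if (Input.GetKeyDown(KeyCode.A)) transform.Rotate(Vector3.up,    -rotationStep, Space.Self);
        if (Input.GetKeyDown(KeyCode.D)) transform.Rotate(Vector3.up,     rotationStep, Space.Self);
    }
}
```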

A secondary feature, implemented with limited success, was tracking the tool blocks. This meant that the tool could be moved or rotated, depending on either the user’s preference or the need to see projection data in shadowed areas. The OpenCV framework for Unity allowed us to use markers fixed to the tool to track its pose and location. The main issue with this was that it was difficult to determine whether problems were caused by the tracking, the markers or the calibration.
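Whatever the detection side looks like (ArUco markers via OpenCV in our case), the output the rest of the system cares about is a pose for the tool, which the virtual model then follows. The sketch below shows only that “follow the pose” side, with the tracker abstracted behind a hypothetical interface rather than any real OpenCV-for-Unity API.

```csharp
using UnityEngine;

// Hypothetical interface standing in for the marker tracker; the real
// detection (ArUco via an OpenCV plugin) would sit behind it.
public interface IToolTracker
{
    bool TryGetToolPose(out Vector3 position, out Quaternion rotation);
}

// Illustrative follower: moves the virtual tool toward the tracked pose with
// simple smoothing so the projection doesn't jitter with detection noise.
public class ToolPoseFollower : MonoBehaviour
{
    [SerializeField] private float smoothing = 10f; // higher values track more tightly

    private IToolTracker tracker;

    public void SetTracker(IToolTracker toolTracker) => tracker = toolTracker;

    private void Update()
    {
        if (tracker == null) return;
        if (!tracker.TryGetToolPose(out Vector3 position, out Quaternion rotation)) return;

        // Frame-rate independent exponential smoothing toward the new pose.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, rotation, t);
    }
}
```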

Recording and Control

A goal of KAIL and this project was also to record and store what the laminator was doing, not just display instructions. To that end, since a camera was already pointed at the laminator for the interactive functions, we could also record the laminator’s actions. Naturally, this recording process would be in the control of the operator. This record of actions could in future be related to some capture of the ply outcomes, and those in turn to quality outcomes from ultrasonic scans of the completed part. This data would enable us to construct a full model of how touch-level interactions can eventually lead to quality issues.
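At its simplest, the node-touch events already generated for interaction can just be appended to a timestamped log. The sketch below writes them to a CSV; the recorder class, file layout and field names are our own illustration.

```csharp
using System;
using System.IO;
using UnityEngine;

// Illustrative recorder: appends node-touch events to a CSV so a layup
// session can later be replayed or related to ply and quality outcomes.
public class LayupRecorder : MonoBehaviour
{
    private StreamWriter writer;

    public bool Recording { get; private set; }

    public void StartRecording()
    {
        string path = Path.Combine(Application.persistentDataPath,
            $"layup_{DateTime.Now:yyyyMMdd_HHmmss}.csv");
        writer = new StreamWriter(path);
        writer.WriteLine("time_s,step,node_id");
        Recording = true;
    }

    public void LogTouch(int stepIndex, string nodeId)
    {
        if (!Recording) return;
        writer.WriteLine($"{Time.time:F3},{stepIndex},{nodeId}");
    }

    public void StopRecording()
    {
        Recording = false;
        writer?.Dispose();
        writer = null;
    }

    private void OnDestroy() => StopRecording();
}
```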

Screenshot of capture for LayupRITE PIAR showing the skeleton tracking, projected user interface and ArUco tracking markers on the tool

Controls were also provided by touch interaction. In a similar way to KAIL, there were forward and back buttons to move through the layup stages. Additionally, there were buttons to control the recording; the image above shows the “pause” button on the right-hand side. These were projected buttons located on the table.
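A common way to make a projected region behave like a button without any physical feedback is a dwell timer: the hand has to stay over the region for a short hold before it fires, which avoids accidental presses as the hand sweeps past. The sketch below works on that assumption; we are not claiming this is exactly how the LayupRITE buttons were implemented.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative projected "button": a trigger volume over the table surface
// that fires once the hand has dwelt inside it for a short hold time.
[RequireComponent(typeof(BoxCollider))]
public class ProjectedButton : MonoBehaviour
{
    [SerializeField] private float dwellSeconds = 0.6f;
    [SerializeField] private UnityEvent onPressed; // e.g. next stage, pause recording

    private float hoverTime;
    private bool fired;

    private void Awake()
    {
        GetComponent<BoxCollider>().isTrigger = true;
    }

    private void OnTriggerStay(Collider other)
    {
        // Assumes hand objects are tagged "Hand" and carry a Rigidbody
        // so trigger events are raised.
        if (fired || !other.CompareTag("Hand")) return;

        hoverTime += Time.deltaTime;
        if (hoverTime >= dwellSeconds)
        {
            fired = true;
            onPressed.Invoke();
        }
    }

    private void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("Hand")) return;
        hoverTime = 0f;
        fired = false;
    }
}
```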

Second Screen

Another improvement from previous projects was the incorporation of a second screen. Since the application runs on a PC, adding another display (as well as the projector) was simple enough. Thus, the PC’s monitor was used to display additional information to the user. For this project it was intended more as a back-up to the projected info, but it also opens up the opportunity to display information such as where the part-in-progress will go in a larger assembly or product. This line-of-sight to the final product is potentially a useful and important motivating factor.
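Unity makes the multi-display part straightforward: extra displays appear in Display.displays and have to be activated explicitly at runtime (and only in built players, not in the editor). A minimal sketch follows; which index is the projector and which is the monitor depends on how the operating system enumerates them.

```csharp
using UnityEngine;

// Minimal sketch: activate a second physical display and route a dedicated
// camera (the "ancillary info" view) to it.
public class SecondScreen : MonoBehaviour
{
    [SerializeField] private Camera infoCamera; // renders the secondary information view

    private void Start()
    {
        // Secondary displays start inactive; activation only works in builds.
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate();
            infoCamera.targetDisplay = 1;
        }
    }
}
```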

Version of LayupRITE PIAR at end of Ufi Project 1

Outcomes of Ufi Project 1 – Horizon Scan

This series of posts is intended to showcase the top-level outcomes of Ufi Project 1 titled “Augmented Learning for High Dexterity Manufacture”. This project was funded by Ufi, a vocational learning charity. The main difference between this and previous works was that the focus was on skills training. Training had always been touted as an application for LayupRITE, but this was the first time it was the specific goal. This gave the project two opportunities: firstly, to further develop the LayupRITE system and secondly to get a closer look at training as an application.

Skills training and Horizon Scanning

A key difference in approach was required when thinking about skills training. Previously, our strong suit had been drape simulation and working toward unambiguous instruction sets. Going into this project we believed that we could make a series of moulds of increasing complexity and walk learners through them. However, it was explained to us that this wouldn’t necessarily do anything for retention of the information. This was the best explanation we were able to come up with:

IKEA do great instructions, but if you were to take those instructions away, would you be able to assemble that wardrobe tomorrow, or next week? Would you know how to assemble a similar, but different wardrobe?

We also had to understand what good looked like from a learning design standpoint and what was out there already. To achieve this, we undertook a “Horizon Scan” of the current landscape of skills training in composites, current augmented reality (AR) applications and a study of learning theory and instructional design. The top-level outcomes from these three pillars were:

Summary of each pillar of investigation in Horizon Scan

The composites training and AR applications pillars gave us encouragement that there was a space for LayupRITE to exist. There were a variety of AR applications in other industries and there appeared to be an opportunity to modernise, digitise and “smart”-ise composite laminator training.

A particularly interesting application was Soldamatic. Their system uses a welding visor/headset and torches with AR markers to better simulate the working environment. The system displays the material type overlaid onto real-world models of components to be welded. What is of interest here is how it ties in with the learning design findings, particularly “fading feedback” and “Cognitive – Associative – Autonomous”. Over the course of the Soldamatic programme, the heads-up display in the visor shows less and less information as the user gains experience. This is a great example of “fading feedback” and ties in with the “Cognitive – Associative – Autonomous” approach to learning.

The Cognitive – Associative – Autonomous Model

  • Cognitive – The learner is being told what to do and must think about how to do the task
  • Associative – The learner understands what to do and can predict outcomes
  • Autonomous – The task is performed instinctively, the focus is on strategy and efficiency

7 Principles for “What good looks like”

Finally, the pillars of the Horizon Scan led us to 7 principles for what good looks like:

1. Learning outcomes and performance standards to be achieved are clearly defined

  • the precise and detailed analysis of the skills and processes, and the range and degree of difficulty of these to be covered
  • the accuracy, speed and consistency with which they need to be undertaken
  • the expected capability to transfer and adapt their application to different circumstances

2. The learning programme takes account of the stages of skills acquisition and the level of expertise of the learner

  • starting with prescriptive guidance of generic skills, moving toward the information received in a manufacturing context
  • fading feedback as learner moves through programme (and transitions through skills acquisition stages)
  • self-direction and autonomy of learning programme (will need trainer intervention/assessment as well)

3. A low risk, low-cost environment provides for relevant and deliberate practice

  • low risk – training environment lower risk than in-house training on real parts
  • low cost – attempt to simulate material (material is largest cost in training), lower cost than taking experienced staff away from production, aim to accelerate skill acquisition
  • deliberate practice – user control of programme

4. Guidance and feedback is targeted on what the learner needs to accelerate their skills acquisition and presented as simply as possible

  • “as realistic as necessary” – animations preferred over video
  • “multi-modal” – explore options for audio as well as visual feedback
  • multi-screen – use of secondary display for more detailed/ancillary information

5. The learning programme enhances and supplements, if necessary, the intrinsic motivation of adult learners

  • evidence of competence displayed to the learner (some gamification) – mastery
  • show where these skills are used (e.g., high performance auto/aero parts) – esteem/recognition
  • user-control over learning programme – autonomy

6. Evaluation is built-in from the outset and enables continuous iteration and improvement

  • Relevance – relevant evidence provided
  • Facilitation – development of effective accelerators
  • Transferability – does the training transfer into the real world?

7. Attention is given to the whole learning context, not just the technology

  • Practical issues – how the tool is set up and used
  • Learner perspectives – introduction to tool at different levels of experience etc.
  • Trainers and coaches – roles, support learners, how the tool supports them
  • Wider environment – employers, awarding bodies, product design, quality control etc.

Evolution of LayupRITE – II – PIAR

In the next few posts we will be discussing some of the hardware choices going from the LayupRITE systems on display at CAMX 2018 and Advanced Engineering 2018 to the version undergoing site trials in 2020/2021. In later series of posts we will be discussing the various software upgrades, updates and changes.

LayupRITE at CAMX
The LayupRITE stand in the awards pavilion at CAMX

LayupRITE Projected Interactive AR

The projected AR concept of LayupRITE was a development from earlier UoB research on finding improved and novel ways to display information to a laminator on a part-in-progress. The chosen method was using a projector to overlay information onto a part, a type of augmented reality. To make the system interactive it was coupled with a Microsoft Kinect. The Kinect uses both RGB and depth cameras to track users as “skeletons”. These skeletons can be used to control the virtual projected instructions. The Kinect was also used to calibrate the projector system to align the projected information to the physical surface.
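For reference, reading those tracked skeleton joints from the Kinect v2 SDK looks roughly like the sketch below. The namespace differs slightly between the Unity plugin (Windows.Kinect) and the desktop SDK (Microsoft.Kinect), and the HandTracker class itself is our illustration rather than LayupRITE code.

```csharp
using Windows.Kinect; // Kinect v2 Unity plugin namespace (Microsoft.Kinect in the desktop SDK)
using UnityEngine;

// Sketch: poll the Kinect body stream and expose the right-hand joint so
// other scripts (nodes, projected buttons) can follow it.
public class HandTracker : MonoBehaviour
{
    private KinectSensor sensor;
    private BodyFrameReader reader;
    private Body[] bodies;

    public Vector3 RightHandPosition { get; private set; }

    private void Start()
    {
        sensor = KinectSensor.GetDefault();
        reader = sensor.BodyFrameSource.OpenReader();
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        sensor.Open();
    }

    private void Update()
    {
        using (var frame = reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);
        }

        foreach (var body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            // Kinect camera-space coordinates in metres; mapping into the
            // projector/Unity frame is handled by the calibration.
            var joint = body.Joints[JointType.HandRight].Position;
            RightHandPosition = new Vector3(joint.X, joint.Y, joint.Z);
            break;
        }
    }

    private void OnDestroy()
    {
        reader?.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```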

LayupRITE Alignment Image
Aligning virtual mesh to physical tool

This projected AR experience worked well for the most part. There were some issues: the calibration and alignment were never quite perfect, and setup was often lengthy. However, being able to physically interact with the part and projection data without requiring markers was a definite advantage. It was felt that, with some further development and possibly substituting some components, the projected interactive augmented reality (PIAR) system would be an ideal platform for composites layup.

However, with the system as it was at the end of 2019 there were some drawbacks which would need to be addressed. First among these were the setup requirements, both in terms of software and hardware. On the software side, we’ve previously mentioned some of the calibration and alignment issues; the main issue was that the alignment still required manual intervention. Tool tracking was also a planned feature for further development. On the hardware side, the 0.2 version required a heavy-duty tank-trap and pole-mount setup, which was cumbersome to transport and set up. That said, a solution could easily be designed for a permanent, dedicated workspace.

Image shows pole and clamp mounting of projector, camera, monitor and PC
Rear view of PIAR setup

A second issue was that the interactivity got a lukewarm reception. As mentioned earlier, some substitution would have been required in the future anyway, and a replacement for the Kinect v2 would likely improve matters. The third issue was the cost. For the key sector, colleges, the cost of the PIAR system was prohibitive for a single-user workstation. This could be mitigated by using a single system with multiple tools and users.

Projected Interactive Augmented Reality (PIAR) LayupRITE System

Pros:

  • Interactive
  • Visible to everyone (unlike head-mounted displays (HMDs))
  • Commercial-off-the-shelf components
  • Runs off a regular PC
  • Cheaper than Laser Ply Projection (LPP)

Cons:

  • Too expensive for key customer (without modification)
  • Calibration and alignment issues
  • Some interactivity issues
  • Kinect v2 requires substitution
  • Not a light-weight, transportable system

All in all, the PIAR system remains a viable option for LayupRITE. There are still refinements to be made, particularly in the calibration area, but it is felt that this type of AR is probably the optimal method for layup, both training and practice (until we get Expanse-style holograms, of course!)

LayupRITE 101

Building on the successes of the “Augmented Learning for High Dexterity Manufacturing – LayupRITE” project, follow-on funding was acquired from the Ufi VocTech Trust. This project, “LayupRITE 101”, sought to integrate the LayupRITE methodology into an existing composites training course. Because the LayupRITE technology was originally focussed on manual layup, the “Introduction to Manual Prepreg” course run by NCC Connect was selected as a template.

Intro to Manual Prepreg and LayupRITE

This course, as the title suggests, is an introduction to the techniques and theory of laying up a component using prepreg materials. Currently the course is run over two days and split into four sessions. Two of the sessions are classroom-based theory, the other two are workshop sessions to teach the hands-on, practical skills of working with prepreg materials. It was felt that this course, both the theory and practical aspects, could be effectively digitised. The classroom content could be delivered using e-learning techniques and the workshop activities could be assisted with LayupRITE.

Benefits of LayupRITE

The aim of integrating e-learning and LayupRITE into this training course is to augment the trainer and move toward a more “blended learning” experience for the learners. This would expand the reach of the training to more learners and give trainers the opportunity to do fewer demonstrations and spend more time supervising and guiding.

The other opportunity is to use simulations to allow students to learn composite drape in a lower-risk way. Simulations would let users try unlimited approaches to a layup task without the cost or risk of handling prepreg materials. Additionally, digital simulations don’t require a workshop and so can be done anywhere, any time.

Tacit skills and workshop tasks

However, the tacit, hands-on skills developed during the workshop sessions are a crucial benefit of the course. These skills can’t be effectively replicated digitally, but digital tools, such as augmented reality and simulation practice, can be used to focus the workshop tasks on building the necessary skills.

Delivering simulations digitally to the learners means that they will have an on-hand reference for the task. For the instructors it will mean less time giving general demonstrations and more time facilitating individual guidance. To make this an enhancement rather than a burden, LayupRITE will have to be simple to set up and provide enough information at the right level for the user. The previous LayupRITE project has given the team insight into how to achieve this.

Manual layup of a complex part
In-progress draping of a composite ply over a complex mould shape
LayupRITE Simulation of complex shape
LayupRITE Simulation of in-progress layup over a complex mould shape