A deep learning neural network for manufacturing should be exposed to as much variation as possible, including different hours and days of production, a mix of random dates, environmental factors like changing ambient light, and multiple production sites. Failing to account for these variations in training data can lead to reduced accuracy, says Donato Montanari at Zebra Technologies.
The number of automotive-related manufacturing plants in Europe is growing, with around 322 sites in 2022, including 38 electric battery plants, compared to 301 the previous year, according to research from the European Automobile Manufacturers' Association.
And a few months ago, we read that major electric battery manufacturers from Scandinavia and Asia are planning investments totalling around 10 billion euros in new European gigafactories.
New facilities and the modernisation of existing sites to support electric vehicle production are prime opportunities to rethink tooling and processes to maximise efficiency, quality, and labour productivity. News outlets reported that an electric-vehicle maker has removed more than 100 steps from its battery-making process, 52 pieces of equipment from the body shop, and more than 500 parts from the design of its flagship vehicles.
The result of rethinking its processes has been a 35% reduction in the cost of materials for vans and savings of similar scale for its other vehicles.
We know that when it comes to developing new factories, upgrading existing ones, and procuring solutions, the focus tends to sit at site level, with input and sign-off shared between site and corporate levels.
But there is always the possibility of different sites using different solutions for similar workflows, and the risk that expertise and data are not shared across sites, including when using newer AI-powered solutions, where data quality is essential. The same can be true for visual inspection teams using machine vision systems for quality and compliance.
Among machine vision leaders in the automotive industry, almost 20% in Germany and the UK say their artificial intelligence (AI) machine vision could be working better or doing more, according to a Zebra report looking at AI machine vision in the sector.
Are there ways that technologies like deep learning machine vision could be better deployed and used? Could discussions about cloud security and governance be balanced against the opportunity to leverage the cloud for high-value workflows, such as testing and quality control with deep learning machine vision, and for new computing and collaboration resources for engineers and data scientists?
AI, particularly deep learning, thrives on data: the volume, variety, and velocity of high-quality data are key to training and testing deep learning models so that they deliver the expected outcomes when deployed in real life.
Experience and available time can vary between teams and sites, which can create silos and make achieving data quality more challenging. Data needs to be stored, annotated, and used for training models, with separate datasets needed for model testing. It makes no sense for company data in these cases to remain siloed, to the detriment of better training for machine vision models.
A deep learning neural network should be exposed to as much variation as possible, including different hours and days of production. A mix of random dates in the dataset is needed; this may be inconvenient, since it requires data capture over a period of time (unless a platform for simulating training data is used), but it is crucial for training a robust model.
Industrial processes are also subject to various environmental factors, such as changing ambient light, materials with slight variations, vibrations, noise, temperatures, and alterations in production conditions. Failing to account for these changes in your training data can lead to reduced model accuracy.
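Real images captured under varied conditions remain the gold standard, but as a minimal, hypothetical sketch (assuming a Python/PyTorch pipeline with torchvision, which is not specified here), some of this environmental variation can also be approximated with training-time augmentations:

```python
# Illustrative only: assumes a PyTorch/torchvision training pipeline.
# These augmentations approximate environmental variation (ambient light,
# vibration blur, slight part misalignment); they complement, not replace,
# real images captured under different production conditions.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3),        # changing ambient light
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),    # vibration or focus drift
    transforms.RandomAffine(degrees=2, translate=(0.02, 0.02)),  # small positional shifts
    transforms.ToTensor(),
])
```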
Each site may introduce variations in sharpness, working distance, ambient light, and other factors that the model will need to learn to manage, so training datasets should reflect the full range of variations the model may encounter in real-world scenarios. If industrial processes involve multiple production sites, it is a mistake to collect data from only one of them, or to collect from all of them but keep the data siloed.
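As a hypothetical illustration of how such a gap can be spotted, assuming each image carries simple metadata (site, capture date, shift) in a CSV file whose column names are invented here, a quick coverage audit reveals the sites and periods the pooled training set never sees:

```python
# Hypothetical coverage audit: assumes an image-metadata CSV with
# 'site', 'capture_date' and 'shift' columns; names are illustrative.
import pandas as pd

meta = pd.read_csv("training_images_metadata.csv", parse_dates=["capture_date"])

# Count images per site, per month of capture, per shift.
coverage = (
    meta.groupby(["site", meta["capture_date"].dt.to_period("M"), "shift"])
        .size()
        .unstack(fill_value=0)
)
print(coverage)
```

Empty or very small cells in that table are exactly the blind spots a siloed, single-site approach creates.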
To fix this, data should be captured under different environmental conditions and at different production sites, and then shared, but how?
Another issue with a siloed site approach concerns the annotation of training data for deep learning models. Inaccurate, unclear, and inconsistent annotations inevitably lead to models that do not perform well. It is critical to ensure annotations are precise and unambiguous, including across production sites making the same items, but this requires teams to be able to collaborate on annotation projects.
A common mistake in real-world projects is marking different defect types on different images while leaving some defects unmarked altogether. What counts as a defect can also be subjective, so cross-validation is important. All defects, regardless of type, should be clearly marked on all relevant images.
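As a hedged sketch of what consistency checking could look like, assuming a simple per-image JSON annotation format and defect class names invented purely for illustration, a short script can flag labels that fall outside an agreed taxonomy and images on which annotators disagree:

```python
# Hypothetical consistency check: assumes per-image JSON annotations shaped like
# {"image": "...", "site": "...", "defects": ["scratch", ...]}; the file layout
# and label names are illustrative, not a specific tool's format.
import json
from pathlib import Path
from collections import defaultdict

SHARED_TAXONOMY = {"scratch", "dent", "missing_weld", "paint_run"}  # agreed defect classes

labels_per_image = defaultdict(list)
for path in Path("annotations").glob("*.json"):
    record = json.loads(path.read_text())
    unknown = set(record["defects"]) - SHARED_TAXONOMY
    if unknown:
        print(f"{record['image']} ({record['site']}): labels outside taxonomy: {unknown}")
    labels_per_image[record["image"]].append(set(record["defects"]))

# Flag images where two annotation files disagree on which defects are present.
for image, annotations in labels_per_image.items():
    if any(a != annotations[0] for a in annotations[1:]):
        print(f"{image}: conflicting annotations, needs review")
```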
Again, without a unified approach that leverages the cloud, the challenge of consistent data annotation across sites and countries remains.
Machine vision teams across manufacturing industries need new ways to leverage deep learning machine vision, which should include using the cloud. A cloud-based machine vision platform would allow teams to securely upload, label, and annotate data from multiple manufacturing locations across site, country, and region.
A larger, more diverse pool of data drawn into a cloud-based platform from across sites and environments is better for deep learning training. Such a platform would allow defined users to work together in real time, collaborate on annotation, training, and testing projects, and share their expertise.
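No specific platform or API is named here; as an illustrative sketch only, pooling could start with each plant pushing its images to shared, access-controlled cloud object storage under a per-site prefix (boto3 and Amazon S3 are used purely as an example backend, with hypothetical bucket and site names):

```python
# Illustrative sketch: hypothetical bucket, site ID and folder names.
# Each plant contributes captures to one pooled, access-controlled dataset.
from pathlib import Path
import boto3

SITE_ID = "plant-emea-01"      # hypothetical site identifier
BUCKET = "mv-training-data"    # hypothetical shared bucket

s3 = boto3.client("s3")
for local_file in Path("captures/today").glob("*.png"):
    s3.upload_file(str(local_file), BUCKET, f"{SITE_ID}/{local_file.name}")
```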
With a cloud-based platform, users with defined roles, rights, and responsibilities could train and test deep learning models in the cloud. Powered by much better training and testing data, the resulting models may deliver levels of visual inspection analysis and accuracy beyond conventional, rules-based machine vision for certain use cases.
These outcomes are sought by manufacturers in the automotive, electric battery, semiconductor, electronics, and packaging industries, to name a few.
A cloud-based solution also delivers scalability and accessibility of computing power. With traditional systems, a select few employees get powerful GPU cards in their workstations to run large training jobs. With the cloud, every user can access the same high level of computing power from a laptop.
Cloud usage does generate costs, but with a pay-as-you-go subscription model it may still compare favourably with investing in a company's own servers and in additional, hard-to-find IT personnel.
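Whether that holds depends entirely on workload and local prices; as a back-of-the-envelope comparison using purely hypothetical figures, the trade-off is straightforward to sanity-check:

```python
# All figures below are invented for illustration; real GPU prices,
# utilisation and staffing costs vary widely and need real quotes.
on_prem_server = 40_000            # hypothetical GPU training server purchase (EUR)
on_prem_support_per_year = 15_000  # hypothetical annual IT/maintenance cost (EUR)
cloud_gpu_hour = 3.0               # hypothetical pay-as-you-go price per GPU hour (EUR)
training_hours_per_year = 2_000    # hypothetical annual training workload

print(f"Cloud, year 1:   {cloud_gpu_hour * training_hours_per_year:,.0f} EUR")
print(f"On-prem, year 1: {on_prem_server + on_prem_support_per_year:,.0f} EUR")
```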
A software-as-a-service model would give machine vision teams the flexibility and ease of investing in a cloud-based platform through a subscription, while the technology partner seamlessly adds new features, models, and updates. Cloud-based deep learning platforms will also allow trained models to be deployed at the edge, on PCs and devices, to support flexible, digitised workflows on the production line wherever a user or team is located.
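As a minimal sketch of what that edge deployment step could look like, assuming the cloud-trained model is a PyTorch network (no framework is specified here) and using ONNX purely as an example of a portable format:

```python
# Hypothetical export step: the architecture, weights file and input size
# are placeholders; ONNX is just one example of a portable edge format.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for the trained inspection model
model.load_state_dict(torch.load("trained_inspection_model.pt"))
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)          # one image-sized sample tensor
torch.onnx.export(model, dummy_input, "inspection_model.onnx")
```

The exported file can then be copied to line-side PCs or devices and run with a local inference runtime.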
Manufacturing leaders expect AI to drive growth. Growing AI adoption, combined with leaders prioritising digital transformation, underscores manufacturers' intent to improve data management and leverage modern technologies that enhance visibility and quality throughout the manufacturing process.
One of today's most significant quality management issues is integrating data. With AI and data goals in place, and new automotive plants planned, the time is ripe to look at the potential of the cloud to leverage data and extend the benefits of deep learning machine vision.