Hackathon Experience with Huawei: Innovating Machine Learning for Limited Data in Radio Access Technology
Participating in a hackathon is always a test of creativity, technical knowledge, and speed. When the hackathon experience centers on radio access technology, the challenges get even steeper, especially when machine learning comes into play. Unlike fields such as image or language processing, which enjoy endless streams of data, radio networks operate with much less. This post walks through how the Huawei hackathon team tackled this challenge with out-of-the-box thinking, offering inspiration for anyone considering their own event or looking to understand AI problem-solving with small datasets.
Why Radio Access Technology Is Unique When It Comes to Data
Radio access technology forms the backbone of mobile communications, but the data you get from these networks is far more limited than other domains. In fields like image processing or natural language understanding, training data is everywhere. A quick look online, and you’re swimming in labeled images or sentences.
Compare that to radio networks:
- Image Processing: Billions of labeled images available online
- Language Processing: Endless text and speech examples across countless languages
- Radio Access Technology: Small, specialized datasets from simulations or deployed systems
Machine learning models thrive on data. Large, high-performing models like deep neural networks only work their magic when they can “see” a lot of different scenarios. In radio access, you’re often working with the bare minimum—sometimes just enough to get started, but never enough for the data-hungry algorithms so common in other AI fields.
This limitation shifts the focus: Instead of throwing more data at the problem, the challenge is to design machine learning methods that make the most of what little you have.
For a deeper exploration into how machine learning is being modeled in 5G radio access networks and the push to integrate AI with limited radio network data, the paper Modelling of ML-Enablers in 5G Radio Access Network dives into the topic.
The Challenge of Small Data: Rethinking Machine Learning
Classic deep learning relies on huge labeled datasets. Labeling means that for every example in your dataset—say, an image or a snippet of radio traffic—you know exactly what it is or what it means. The more labeled data, the better these systems get at spotting patterns and making decisions.
But in radio access networks, collecting and labeling enough examples is tough. Hackathon participants faced this in their challenge. They received:
- A large set of unlabeled images: These had no hints about their contents.
- A small set of labeled images: Each labeled with the right classification.
- An independent test set: This was used to score the accuracy of each solution.
The goal? Propose a machine learning solution that could make use of the unlabeled images by extracting useful information in an unsupervised way—then use those learned features alongside the labeled images to train a classifier.
This challenge forced teams to explore self-supervised and unsupervised learning strategies, which don’t need as many labeled examples. The specifics of the data setup looked like this:
- Unlabeled Data: Used for representation learning (automatic feature extraction)
- Labeled Data: Used for supervised classification
- Independent Test Data: Final performance check
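The three-way split above can be sketched as a pipeline. The code below is a minimal illustration, not the team's actual implementation: the datasets are random placeholders, and `learn_features` stands in for whatever unsupervised representation learner you choose (the team used stacked RBMs, described later).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three datasets described above:
# many unlabeled examples, a few labeled ones, and a held-out test set.
X_unlabeled = rng.random((1000, 64))      # no labels available
X_labeled = rng.random((50, 64))
y_labeled = rng.integers(0, 4, size=50)
X_test = rng.random((20, 64))

def learn_features(X):
    """Placeholder for unsupervised representation learning.

    In practice this would fit an RBM stack or autoencoder on the
    unlabeled data; here it just centers features to keep the
    sketch self-contained and runnable.
    """
    mean = X.mean(axis=0)
    return lambda Z: Z - mean

extract = learn_features(X_unlabeled)   # step 1: learn from unlabeled data
features = extract(X_labeled)           # step 2: featurize the labeled set
test_features = extract(X_test)         # step 3: score on the test set
```

A classifier trained on `features` and `y_labeled`, then evaluated on `test_features`, completes the loop.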
With this arrangement, the hackathon experience pushed participants to stretch their AI skills and adopt strategies beyond “regular” deep learning.
If you’re planning on running your own data-focused event, the article on how to organize a hackathon offers a practical step-by-step guide.
Building an Effective Self-Supervised Learning Solution
So how did the Huawei hackathon team take on the challenge? Their approach combined self-supervised learning with careful engineering choices, balancing theory and practical tricks.
The model architecture they created had two core parts:
- Feature extractor: The team stacked multiple Restricted Boltzmann Machines (RBMs)—a type of neural network good at finding structure in unlabeled data.
- Classification layer: On top of the features, they placed a typical classifier, trained using the small labeled dataset.
To keep the model from overfitting, the team paid close attention to every detail: which parameters to use, the choice of loss functions, and especially how to account for differences between the unlabeled and labeled datasets. Sometimes the types of images in the unlabeled dataset were quite different from those in the labeled set, which could trick the model into learning features that didn’t transfer well.
Key design considerations included:
- Avoiding features that only worked on the specific unlabeled set
- Picking loss functions that promoted generalizable learning
- Fine-tuning the depth and structure of the stacked RBMs to maximize feature quality
Simple breakdown of the model:
- RBM stack: Unsupervised feature learning from all available unlabeled images
- Classifier: Supervised learning on labeled images using the learned features
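The two-part design above can be sketched with off-the-shelf components. This is a hedged approximation, not the team's code: it uses scikit-learn's `BernoulliRBM` for greedy layer-wise pretraining on unlabeled data, then a logistic regression classifier on the small labeled set; all data here is random placeholder values in [0, 1], as `BernoulliRBM` expects.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: a large unlabeled pool and a small labeled set.
X_unlabeled = rng.random((500, 64))
X_labeled = rng.random((40, 64))
y_labeled = rng.integers(0, 3, size=40)

# Stack two RBMs: each layer learns structure in the previous layer's output.
rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)

# Unsupervised stage: greedy layer-wise pretraining on the unlabeled images.
h1 = rbm1.fit_transform(X_unlabeled)
rbm2.fit(h1)

# Supervised stage: push the small labeled set through the trained stack,
# then fit a conventional classifier on the learned features.
Z = rbm2.transform(rbm1.transform(X_labeled))
clf = LogisticRegression(max_iter=1000).fit(Z, y_labeled)
```

The depth of the stack and the number of components per layer are the kinds of knobs the team reports tuning to maximize feature quality without overfitting.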
If you’re interested in exploring more about applied AI for network connectivity, the article Machine learning approach of multi-RAT selection for radio access networks covers practical use cases.
The Rotation Prediction Self-Supervision Trick
One of the main hurdles was extracting actionable knowledge from unlabeled images. Here’s where the team got creative, using a clever self-supervised learning technique involving rotation prediction.
They took each image in the unlabeled dataset and rotated it by four possible angles: 0°, 90°, 180°, and 270°. The self-supervised task for the neural network? Guess which rotation had been applied to each image.
This idea may seem simple, but it works for a few reasons:
- It forces the model to pay attention to high-level semantic content of images, such as object identity, position, and context.
- There’s no need for labels; the rotation angle serves as a label that the team controls.
- It builds robustness: the network learns general visual concepts, not data-specific quirks.
Here’s how the rotation task works:
- Step 1: Take every unlabeled image and create four versions: one for each rotation angle.
- Step 2: Train the network to predict the rotation used for each version.
- Step 3: Use the learned features as input for your downstream classification task on the labeled set.
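The dataset-construction part of these steps can be sketched in a few lines. This is an illustrative sketch, not the team's code: `make_rotation_dataset` is a hypothetical helper that produces the four rotated copies and their self-supervised labels from a batch of images.

```python
import numpy as np

def make_rotation_dataset(images):
    """Build the self-supervised rotation task: four rotated copies
    of each image, labeled 0-3 for 0, 90, 180, and 270 degrees."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):                    # k quarter-turns
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Hypothetical batch of ten unlabeled 8x8 "images".
rng = np.random.default_rng(0)
unlabeled = rng.random((10, 8, 8))

X, y = make_rotation_dataset(unlabeled)
# A network trained to predict y from X must learn orientation-sensitive,
# semantically meaningful features; its backbone is then reused as the
# feature extractor for the downstream labeled classification task.
```

Because the labels come from the transformation itself, the training signal costs nothing to produce, which is exactly what makes the trick attractive when labeled data is scarce.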
Predicting image rotation is a well-researched trick for teaching networks strong visual features—especially useful when labeled data is scarce. The team then added a final classifier trained with the labeled images, tying it all together.
Summary of rotation angles for the self-supervised task:
- 0° (no rotation)
- 90° (quarter turn)
- 180° (upside down)
- 270° (three-quarter turn)
The result? A model that learns meaningful, reusable features without the need for large labeled datasets.
What Set This Solution Apart
This hackathon experience was a breakthrough for the team: they had never applied self-supervised rotation prediction before, but thanks to fresh research in recent years, they brought the strategy from paper to practice. Drawing on three research papers published in the past three years, they built a solution that combines ideas from academia with real-world constraints.
The hackathon judges appreciated the novelty. The rotation prediction task was considered especially creative, pushing the boundaries of how unlabeled data can be used in challenging conditions. As one judge said, the solution “really tried to find something new” and demonstrated a willingness to experiment with modern machine learning techniques.
Not just the results, but the way the team worked—digging deep into loss functions, parameter choices, and data sources—gave them a strong foundation for future projects. They now plan to explore how this approach could fit into ongoing research, adapting it for wider use.
Standout attributes of this solution:
- Novel solution—first-time use for the team, inspired by the latest research
- Potential to reshape future research projects in radio access AI
- Strong positive review from both judges and participants
For anyone looking to bring new ideas into company projects, see this list of top corporate hackathon ideas for your team.
Why Hackathons Spark Breakthroughs in AI
The hackathon experience is unique for fostering fast-paced, practical innovation. Events like this push teams to combine cutting-edge research with get-it-done engineering, often under tight deadlines and constraints. Presenting teams with limited data, as in radio access machine learning, encourages simple solutions that actually work.
Platforms like Hackathon.com lead the way in connecting technical talent with real-world challenges, from AI to IoT and beyond.
Benefits of the hackathon experience:
- Rapid development of prototypes that can be tested immediately
- Learning from the community: see what others try, share feedback, network
- Access to new research, often before it reaches mainstream applications
- Experience with industry problems—not just theoretical exercises
If you want more detail on how to run events that produce this level of innovation, the resources at Hackathon.com can help.
Ready to Explore More? Resources for Further Learning
For anyone inspired to host or join a hackathon, or to learn more about machine learning in technology innovation, check out these helpful resources:
- Hackathon.com’s homepage for the latest events and community updates
- How to organize a hackathon—step-by-step event planning guide
- Corporate hackathon services for custom event hosting
- Hackathon masterclasses for advanced learning
- Tips and best practices for more successful hackathons
Jump in, get connected, and experience the challenge and reward of modern hackathon projects for yourself.
Whether you’re a company leader looking for innovation, an AI engineer keen to sharpen your skills, or just curious about what the hackathon experience is really like, there’s never been a better time to get involved. Experiences like Huawei’s show that sometimes, having less is the best way to inspire more.