
With millions of data points, 10,000 contestants, and 100 days, the inaugural AI Challenger champion is finally crowned

Competition and research are never at odds, and engineering and algorithms cannot be separated. Our competition is not for AI theorists, nor for AI engineers; it's for AI Challengers.

By Qiu Lulu

The first AI Challenger Global AI Challenge, a four-month competition, came to an end on December 21. The top five teams on the test sets of four of the five main tracks (image Chinese captioning, human skeleton keypoint detection, English-Chinese machine text translation, and English-Chinese machine simultaneous interpretation) competed in defense-style finals in Beijing, where the winners shared a prize pool of up to RMB 2 million.

Group photo of the award winners and the winning contestants

This is the first year of AI Challenger, yet with its massive, high-quality datasets it attracted the attention of more than a million people in 120 countries. More than 10,000 participants from 65 countries, forming nearly 9,000 teams, entered the competition.

Entry statistics

Participants came from all over the world. Although the main language of the competition was Chinese, it still attracted a sizable number of foreign participants, who accounted for 7.44% of the total; among participants from China, more than 50% came from Beijing, Guangdong, and Shanghai.

Geographical Distribution of Players

Most participants were graduate students from universities and colleges who signed up as self-organized teams. Many strong young engineers competed solo, while teams representing companies or labs made up a smaller contingent.

The final submission rate exceeded 16%, meaning about 1,500 teams ultimately submitted results. For a first-year competition, the AIC's "finish rate" was quite respectable.

Machine Heart covered the finals throughout the day, and we have made a brief tally of the composition of the finalist teams; red denotes the category of the winning team.

Finalist Team Composition

The contestants mostly adopted steady strategies, and their model choices showed a degree of homogeneity.

For example, in the human skeleton keypoint detection track, four of the five teams built directly on winning methods (e.g., Faster R-CNN-based approaches) from the 2016 and 2017 COCO Keypoint Challenge, a competition held in conjunction with top vision conferences, and then refined the model structure and optimization methods to better fit the characteristics of the data.

Another team combined an attention model with image segmentation models from CVPR and ICCV papers of the last two years, gaining a large advantage in processing speed.

The teams showed impressive implementation and engineering skill, leaving untried no hyperparameter or structural element of the model that could be "changed and tested", and after dizzying rounds of debugging within the limited time, they selected the configuration that best balanced effectiveness and efficiency.
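The exhaustive "change and test" process described above is essentially a hyperparameter sweep. A minimal sketch of such a sweep is below; the `evaluate` function is a hypothetical stand-in (a toy scoring function, not any team's actual training pipeline), and the candidate values are illustrative.

```python
import itertools

def evaluate(lr, batch_size):
    """Hypothetical stand-in for a full train-and-validate run.
    Scores a toy objective so the sketch is runnable; a real sweep
    would train the model and return a validation metric."""
    return -((lr - 0.01) ** 2) - ((batch_size - 32) ** 2) * 1e-6

# Candidate values a team might sweep under a tight compute budget.
grid = {
    "lr": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

best_score, best_cfg = float("-inf"), None
for lr, bs in itertools.product(grid["lr"], grid["batch_size"]):
    score = evaluate(lr, bs)
    if score > best_score:
        best_score, best_cfg = score, (lr, bs)

print(best_cfg)  # the configuration maximizing the toy objective
```

With only 2-4 GPUs, an exhaustive grid like this quickly becomes infeasible for real training runs, which is why the contestants' pruning of the search space mattered so much.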

Yes, "best result" is not the only criterion; "speed" and "lower computational requirements" also matter.

Almost every contestant mentioned the limitation of insufficient computing power: 2 to 4 GPUs was the common configuration, clearly far from the "ideal" compute they had in mind. Moreover, with only 87 days between the release of the dataset and the final submission deadline, validating additional methods was a race against time.

The questions judges asked most often during the defenses were "Why did you make this substitution?" and "Beyond improving the final score, what did this adjustment teach you?" They were guiding contestants toward more abstract, even more intuitive thinking: the very thing that both industry and academia in deep learning are working tirelessly on, trying to bridge the gulf between human and machine understanding of a problem. What exactly do machines learn in higher dimensions beyond the reach of the human mind, and how do we understand it?

One of the most memorable sets of replies came from a team in the image Chinese captioning track whose team name roughly translates as "mentally broken, scattered and dispersed".

The contestant flew through his presentation with a breezy attitude: "I'll leave it at that; these are the same methods as the first speaker's." "This is the algorithm of the COCO Challenge first-place winner; I just followed their code." "Batch Normalization is a very common trick; I used it because everyone does."
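The "common trick" the contestant shrugged off, Batch Normalization, normalizes each feature over the batch so downstream layers see inputs with stable statistics. A minimal NumPy sketch of the training-time forward pass (with scalar scale/shift parameters for simplicity; a real layer learns per-feature gamma and beta and also tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature (column) over the batch dimension,
    then scale and shift with gamma/beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
y = batch_norm(x)
# Each column of y now has (near-)zero mean and unit variance.
```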

When the judges tried to steer him toward some ideas about his methodological choices, he concluded, "(Many complex ideas) I thought about, but didn't try in depth. Because when I enter a contest, I want to try the simplest, most common, and most effective methods in the limited time I have."

So, in a rare move, instead of asking more questions, the judges offered advice to this contestant, who not only didn't try to inflate the importance of his work but instead spoke too many plain truths: making it this far into the finals means you must be doing something right. So think again about the "tricks" you have dismissed as "uninnovative" and "unimportant"; there must be something more essential in them, something that goes beyond "tricks" into the realm of "ideas", and that deserves to be expressed by a researcher in a more formal way.

Competition and research are never at odds, and engineering and algorithms cannot be separated.

In an afternoon interview, Zhang Hongjiang, advisor to Today's Headlines (Toutiao) and director of its Technology Strategy Institute, told Machine Heart, "Our competition is not for AI theorists or AI engineers; it's for AI Challengers." He noted that it is important for today's researchers to understand algorithms while being able to work with data hands-on. AI experts need to understand the practical difficulties, and whether to tackle these challenges with an engineering approach or with theoretical breakthroughs is where each researcher's individual strengths come into play.

In a single year, AI Challenger reached, and in some respects surpassed, the size and quality of the benchmark datasets used worldwide for deep learning research.

At ICCV this October, one team even used AIC data for joint training while participating in the COCO Keypoint Challenge. When asked whether they had, in turn, used the COCO dataset for pre-training or joint training on AIC, the researcher answered without hesitation: "No, the AIC dataset is big enough." AIC has also achieved the traffic, attention, and reach of an active data-competition platform, and every bit of growth in attention, especially in the number of entries, means more operational work behind the scenes for the organizing team.

Most importantly, this is not a flash-in-the-pan project.

At the award ceremony, Wang Yonggang, Vice President of the Innovation Works AI Engineering Institute, presented AIC's three-year plan on behalf of the AI Challenger organizing committee. In addition to keeping existing datasets open long-term so that users can make more cross-sectional comparisons, new datasets will enter the construction phase.

The organizing committee conducted an extensive survey on dataset construction and competition directions among people interested in AIC, and found strong demand for industry data, represented by finance, autonomous driving, and retail, as well as for technique-oriented data, represented by image segmentation and pose prediction.

"We will respect the opinions of front-line engineers and researchers in AIC's long-term planning," said Wang Yonggang.

On top of that, Kai-Fu Lee, one of the competition chairs, addressed "AI stakeholders" of every kind, from researchers and potential entrepreneurs to big companies, speaking about what he hoped each group would get out of AIC and inviting them to contribute to a platform like it.

He hopes that large companies will rise through fierce technological competition and grow together, and he calls on them to contribute their data and resources.

He hopes that students and potential entrepreneurs will gain exposure to accurate, massive, world-class data before they leave school or set out on their entrepreneurial journeys, learning where the boundaries of the technology lie; and he calls on them to keep coming back next year, and the year after, to compete and to push further beyond the previous wave of leaders.

Every one of these goals is difficult to realize. Perhaps data sharing between departments within large companies has not yet been achieved, and students and would-be entrepreneurs still have to scramble for GPUs. But perhaps, like neural networks themselves, the platform's utility comes from its scale and dimensionality, the great sense of accomplishment comes from its immense malleability, and its beauty comes from its difficulty.

Machine Heart has launched the 2017 "Synced Machine Intelligence Awards", hoping to record this year's development and progress in artificial intelligence through four major awards and to deliver insight of lasting value to the industry.

