
Ubiquity Sydney AI Institute: AAAI 2018 Accepted Papers at a Glance


AI Technology Review reports: AAAI 2018, the 32nd edition of the top conference in artificial intelligence, was held in New Orleans, USA. At this year's AAAI, Ubiquity Sydney AI Institute had 5 papers accepted: 3 orals and 2 posters.

Ubiquity Sydney AI Institute has also been listed as an "Active Company in the AI Impact Factor Paper Category," with 4 accepted CVPR papers recorded in AI Impact Factor, a database project launched by AI Technology Review. The institute will give a live interpretation of these papers at the AI Catechism Academy in May, so stay tuned.

Paper 1: Domain Generalization via Conditional Invariant Representation

To generalize a model learned from source-domain data to an unseen target domain, our approach learns domain-invariant features. Previous domain adaptation methods learn domain-invariant features by matching the marginal distribution P(X) of the features, but this assumes that P(Y|X) is stable across domains, which is difficult to guarantee in reality. We instead ensure that the joint distribution P(X,Y) matches across domains by matching the class-conditional distribution P(X|Y) while simultaneously accounting for changes in P(Y). The conditionally domain-invariant features are learned with two loss functions: one measures the difference between class-conditional distributions, and the other measures the difference between class-normalized marginal distributions; together they match the joint distribution. If P(Y) in the target domain does not vary much, the learned features are guaranteed to match the target domain well.
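As a rough illustration, the two losses might look like the minimal sketch below. This is not the authors' implementation: it substitutes a linear-kernel MMD for whatever distribution distance the paper uses, and the function names, tensor shapes, and averaging scheme are all assumptions.

```python
# Minimal sketch of the two matching losses (NOT the paper's code).
# Assumption: linear-kernel MMD stands in for the distribution distance.
import torch

def mmd_linear(x, y):
    # Linear-kernel MMD: squared distance between feature means.
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()

def conditional_invariance_losses(feats, labels, domains, num_classes):
    """feats: (N, d); labels: (N,) class ids; domains: (N,) domain ids."""
    dom_ids = domains.unique()
    cond_loss = feats.new_zeros(())
    domain_means = []
    for d in dom_ids:
        class_means = []
        for c in range(num_classes):
            fc = feats[(domains == d) & (labels == c)]
            if len(fc) == 0:
                continue
            class_means.append(fc.mean(dim=0))
            # Match P(X|Y=c) against the same class in every later domain.
            for d2 in dom_ids:
                if d2 <= d:
                    continue
                fc2 = feats[(domains == d2) & (labels == c)]
                if len(fc2) > 0:
                    cond_loss = cond_loss + mmd_linear(fc, fc2)
        # Averaging class means weights every class equally, removing the
        # influence of each domain's label prior P(Y) on the marginal.
        domain_means.append(torch.stack(class_means).mean(dim=0))
    m = torch.stack(domain_means)
    marg_loss = (m - m.mean(dim=0, keepdim=True)).pow(2).sum()
    return cond_loss, marg_loss
```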

Paper 2: Adversarial Learning of Portable Student Networks

Methods for learning deep neural networks with fewer parameters are urgently needed, because the heavy storage and computational requirements of large networks largely prevent their widespread use on mobile devices. Training a lightweight network within a teacher-student learning framework is more flexible than algorithms that directly remove weights or convolutional kernels, and can achieve relatively large compression and acceleration ratios. In practice, however, it is difficult to decide which metric should be used to select useful information from the teacher network. To overcome this challenge, we propose using a generative adversarial network to learn a lightweight student network: the generator is a student network with very few weight parameters, and the discriminator serves as a helper that distinguishes features produced by the student network from features produced by the teacher network. By optimizing the generator and discriminator simultaneously, the resulting student network produces features whose distribution matches that of the teacher's features on the input data.
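The described setup maps naturally onto standard GAN training. Below is a minimal sketch under assumed toy architectures; the MLPs, dimensions, optimizers, and binary cross-entropy loss are placeholders rather than the paper's design.

```python
# Minimal sketch of adversarial feature distillation (NOT the paper's code).
# The student plays the GAN generator; the discriminator tries to tell
# student features from teacher features, pulling the distributions together.
import torch
import torch.nn as nn

feat_dim = 128  # assumed shared feature dimension
teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, feat_dim))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, feat_dim))
disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

for step in range(1000):
    x = torch.randn(32, 784)          # stand-in for a batch of input data
    with torch.no_grad():
        f_t = teacher(x)              # teacher is frozen
    f_s = student(x)

    # Discriminator step: teacher features -> 1, student features -> 0.
    d_loss = bce(disc(f_t), torch.ones(32, 1)) + \
             bce(disc(f_s.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student (generator) step: fool the discriminator.
    g_loss = bce(disc(f_s), torch.ones(32, 1))
    opt_s.zero_grad(); g_loss.backward(); opt_s.step()
```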

Paper 3: Reinforced Multi-label Image Classification by Exploring Curriculum

Humans and animals learn more efficiently from organized knowledge than from disorganized knowledge. Based on the mechanism of curriculum learning, we propose a reinforced multi-label classification method that mimics the human process of predicting labels from easy to hard. The method lets a reinforcement learning agent predict labels sequentially, conditioned on the image features and the labels already predicted. The agent obtains the optimal policy by maximizing the cumulative reward, thereby making multi-label image classification as accurate as possible. Experiments on the PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate the necessity and effectiveness of this reinforced multi-label image classification approach on real multi-label tasks.
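A minimal sketch of the sequential prediction loop might look as follows. The policy network, the reward of +1 per correct label, the fixed rollout length, and the REINFORCE-style update are all assumptions for illustration, not the paper's exact agent or curriculum.

```python
# Minimal sketch of sequential multi-label prediction (NOT the paper's code).
import torch
import torch.nn as nn

num_labels, feat_dim = 20, 512  # e.g. PASCAL VOC has 20 classes

# State = image features concatenated with a mask of labels chosen so far.
policy = nn.Sequential(nn.Linear(feat_dim + num_labels, 256), nn.ReLU(),
                       nn.Linear(256, num_labels))

def rollout(img_feat, true_labels, steps=3):
    """Predict labels one at a time; each correct pick earns reward +1."""
    chosen = torch.zeros(num_labels)
    log_probs, rewards = [], []
    for _ in range(steps):
        logits = policy(torch.cat([img_feat, chosen]))
        logits = logits.masked_fill(chosen.bool(), float('-inf'))  # no repeats
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        rewards.append(1.0 if true_labels[a] == 1 else 0.0)
        chosen[a] = 1.0
    # REINFORCE-style loss: maximize the expected cumulative reward.
    ret = torch.tensor(rewards).flip(0).cumsum(0).flip(0)  # reward-to-go
    return -(torch.stack(log_probs) * ret).sum()
```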

Paper 4: Learning with Single-Teacher Multi-Student

This paper investigates how a single complex generic model can be used to learn a series of lightweight specialized models, namely the Single-Teacher Multi-Student (STM) problem. Taking classical multi-class and binary classification as an example, the paper centers on how a pre-trained multi-class model can be used to derive multiple binary classifiers, one per class. Many real-world problems fit this setting; for example, making fast and accurate judgments about a specific suspect on top of a generic face recognition system. However, running the full multi-class model just to make a binary decision is inefficient, and training a binary classifier from scratch often yields poor performance. Treating the multi-class model as the teacher and each target binary classifier as a student, the paper proposes a gated support vector machine (gated SVM) model. Each binary classifier can make its prediction in combination with the teacher's inference results; moreover, each student receives a sample-complexity measure from the teacher, which makes the training process more adaptive. The proposed model achieves good results in practical experiments.
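One plausible reading of the gating is sketched below with made-up shapes and a softmax gate; the paper's exact gated SVM formulation may well differ. The idea shown: the student's linear SVM score for its target class is modulated by the teacher's belief in that class.

```python
# Hypothetical sketch of a gated binary decision (NOT the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)
num_classes, d = 10, 64
W_teacher = rng.normal(size=(num_classes, d))  # pre-trained multi-class teacher

def student_predict(x, w_student, b_student, target_class):
    teacher_scores = W_teacher @ x                 # teacher inference
    probs = np.exp(teacher_scores - teacher_scores.max())
    probs /= probs.sum()                           # softmax over classes
    gate = probs[target_class]                     # teacher's belief in the class
    svm_score = w_student @ x + b_student          # student's own margin
    return gate * svm_score                        # gated decision value

# Usage: a binary decision for one class, combining student and teacher.
x = rng.normal(size=d)
w_s, b_s = rng.normal(size=d), 0.0
print(student_predict(x, w_s, b_s, target_class=3) > 0)
```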

Paper 5: Sequence-to-Sequence Learning via Shared Latent Representation

Inspired by the fact that the human brain can learn and express the same abstract concept from different modalities, this paper proposes a generic star-shaped framework for sequence-to-sequence learning. The model encodes content from different modalities (the peripheral nodes) into a shared latent representation (SLR), the central node. The modality-invariance of the SLR can be viewed as a high-level regularization on the intermediate vectors, forcing them not only to capture an implicit representation of each individual modality (as in an autoencoder) but also to support transformation, like a mapping model. We can therefore learn the SLR from one or several modalities and generate output in the same modality (e.g., sentence-to-sentence) or a different one (e.g., video-to-sentence). The star structure decouples inputs from outputs, providing a general and flexible framework for a variety of sequence learning applications. In addition, the SLR model is content-specific: it only needs to be trained once on a dataset and can then be reused for different tasks.
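Structurally, the star topology amounts to one encoder and one decoder per modality, all meeting at the SLR. Below is a minimal sketch with placeholder linear modules and invented dimensions; the paper's actual sequence encoders/decoders and invariance regularizer are not reproduced here.

```python
# Minimal sketch of the star-shaped SLR topology (NOT the paper's architecture).
import torch
import torch.nn as nn

slr_dim = 256
encoders = nn.ModuleDict({
    'video':    nn.Linear(1024, slr_dim),  # stand-ins for sequence encoders
    'sentence': nn.Linear(300,  slr_dim),
})
decoders = nn.ModuleDict({
    'video':    nn.Linear(slr_dim, 1024),
    'sentence': nn.Linear(slr_dim, 300),
})

def translate(x, src, dst):
    """Encode modality `src` into the SLR, then decode into modality `dst`."""
    z = encoders[src](x)      # peripheral node -> central node (SLR)
    return decoders[dst](z)   # central node -> another peripheral node

x_video = torch.randn(8, 1024)
caption = translate(x_video, 'video', 'sentence')  # video-to-sentence
recon   = translate(x_video, 'video', 'video')     # autoencoding via same SLR
```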

By the way, we're hiring. Interested?

CCF-GAIR (CCF Global Artificial Intelligence and Robotics Summit)

It will be back at the end of June in Pengcheng (Shenzhen)

11 sharing sessions over 3 consecutive days

June 29 to July 1, see you there!


