1. TensorFlow
TensorFlow is an open source software library that uses data flow graphs for numerical computation. "Tensor" refers to the data being passed around, a tensor (multidimensional array), and "Flow" refers to the computational graph that carries out the operations. Data flow graphs describe mathematical computation as directed graphs of "nodes" and "edges". A node usually represents a mathematical operation, but it can also mark the start of data input, the end of output, or a read/write of a persistent variable. The edges describe the input/output relationships between nodes; they carry multidimensional arrays whose dimensionality can be adjusted dynamically, i.e., tensors.
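To make the node/edge picture concrete, here is a minimal toy sketch of a data flow graph in plain Python. This is an illustration of the concept only, not TensorFlow's actual API: building the graph is separate from running it, just as in TensorFlow's graph/session model.

```python
# Toy data flow graph: nodes are operations, edges carry values between them.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function this node applies
        self.inputs = inputs  # edges: upstream nodes feeding this one

    def evaluate(self):
        # Pull values along the incoming edges, then apply this node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

def const(value):
    # A source node: the beginning of data input.
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4 without computing anything yet...
a, b, c = const(2), const(3), const(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)

# ...then run it, like a session executing the graph.
print(mul.evaluate())  # → 20
```

The key design point mirrored here is laziness: constructing `mul` records the structure of the computation, and nothing is evaluated until the graph is explicitly run.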
TensorFlow has held its position as the leading library for deep learning and machine learning since its official release. The Google Brain team and the machine learning community actively contribute to it and keep it current with the latest developments, especially in deep learning.
TensorFlow started as an open source software library for numerical computation using data flow graphs, but it has since grown into a complete framework for building deep learning models. It primarily supports Python, but also languages such as C, C++, and Java. In addition, this November Google finally released a developer preview of TensorFlow Lite, its new lightweight solution for running TensorFlow on mobile and embedded devices.
2. TuriCreate: a simplified machine learning library
TuriCreate is an open source project recently contributed by Apple. It provides easy-to-use methods for creating and deploying machine learning models for complex tasks such as object detection, human pose estimation, and recommender systems.
Machine learning enthusiasts will likely be familiar with GraphLab Create, a very easy-to-use and efficient machine learning library, and with Turi, the company behind it, whose acquisition by Apple caused quite a stir.
TuriCreate is developed for Python, and its strongest feature is exporting machine learning models to Core ML for use in apps on iOS, macOS, watchOS, and tvOS.
3. OpenPose
OpenPose is a multi-person keypoint detection library that detects the position of people in images or video in real time. Developed and maintained by CMU's Perceptual Computing Lab, it is a great case study in how quickly open source research can be deployed into industry.
One use case for OpenPose is activity detection: capturing an actor's completed movement or activity in real time. The detected keypoints and their movements can then be used to create animated films. OpenPose not only has a C++ API that gives developers quick access, but also a simple command line interface for processing images or video.
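When OpenPose is run with JSON output enabled, it writes one JSON file per frame, where each detected person's keypoints are stored as a flat list of (x, y, confidence) triples. Below is a small sketch of parsing that layout; the JSON is inlined for illustration (the sample coordinate values and any file name you would normally read from are hypothetical):

```python
import json

# A minimal example of the per-frame JSON layout OpenPose emits
# (inlined here instead of being read from a real output file).
frame_json = '''
{"version": 1.3,
 "people": [{"pose_keypoints_2d": [320.0, 180.0, 0.92,
                                   318.5, 240.2, 0.88,
                                   280.1, 245.0, 0.75]}]}
'''

frame = json.loads(frame_json)
for person in frame["people"]:
    flat = person["pose_keypoints_2d"]
    # Keypoints are stored flat as [x, y, confidence, x, y, confidence, ...],
    # so regroup them into (x, y, confidence) triples.
    keypoints = [(flat[i], flat[i + 1], flat[i + 2]) for i in range(0, len(flat), 3)]
    print(keypoints[0])  # first keypoint: (320.0, 180.0, 0.92)
```

Tracking these triples across frames is what makes the activity-detection and animation use cases above possible.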
4. DeepSpeech
DeepSpeech is an open source speech-to-text engine implementing Baidu's Deep Speech research, and it offers state-of-the-art speech recognition. It is built on TensorFlow and Python, but it also has NodeJS bindings and can be run from the command line.
Mozilla has been the main research force behind building and open-sourcing DeepSpeech. Mozilla's VP of Technology Strategy, Sean White, wrote in a blog post: "There are only a few commercial-quality speech recognition engines that are open source, and most are dominated by large companies. This limits the ability of startups, researchers, and even traditional enterprises to customize specific products and services for their users. But together with many developers and researchers in the machine learning community, we have refined this open source library, so DeepSpeech now uses sophisticated, cutting-edge machine learning techniques to build its speech-to-text engine."
5. Mobile Deep Learning
This repo ports some of the best current techniques in deep learning to mobile platforms. Developed by Baidu Research, it deploys deep learning models on mobile devices (such as Android and iOS) with low complexity and high speed.
The repo walks through a simple use case, object detection: it can identify the exact location of an object (such as a phone) in an image. Great, isn't it?
6. Visdom
Visdom supports broadcasting plots, images, and text among collaborators. You can organize the visualization space programmatically or through the UI, creating dashboards of live data, inspecting experiment results, or debugging experiment code.
The inputs to the plotting functions vary, but most of them take a tensor X containing the data and an optional tensor Y containing labels or timestamps. Visdom supports all the basic chart types, creating Plotly-backed visualizations.
Visdom supports both PyTorch and NumPy.
7. Deep Photo Style Transfer
This repo is based on the recent paper Deep Photo Style Transfer, which introduces a deep learning approach to photographic style transfer that handles a wide variety of image content while faithfully transferring the reference style. The method successfully suppresses distortion and works for photographic style transfer across a wide range of scenarios, including time of day, weather, season, and artistic edits.
8. CycleGAN
CycleGAN is an interesting and powerful library that shows the potential of this state-of-the-art technique. As an example, the following figure roughly demonstrates one of its capabilities: adjusting the depth of field of an image. Notably, you never tell the algorithm which part of the image to pay attention to beforehand; it figures that out entirely on its own!
The library is currently written in Lua, but it can also be used from the command line.
9. Seq2seq
Seq2seq was originally built for machine translation, but it has since been applied to a variety of other tasks, including summarization, conversational modeling, and image captioning. The Seq2seq framework can be used for any problem that can be structured as encoding input data into one format and decoding it into another. It is written in Python on top of TensorFlow.
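The encode/decode framing above can be illustrated with a deliberately tiny toy that involves no neural network at all: an "encoder" maps the input sequence to an intermediate representation, and a "decoder" maps that representation to the output sequence. Real Seq2seq models learn both maps with recurrent networks; here they are hand-written purely to show the structure of the problem.

```python
def encode(text):
    # "Encoder": turn the input sequence into an intermediate representation
    # (here, a list of character codes; in Seq2seq, a learned vector).
    return [ord(ch) for ch in text]

def decode(codes):
    # "Decoder": turn the intermediate representation into the target format
    # (here, uppercase text standing in for a translation).
    return "".join(chr(c) for c in codes).upper()

print(decode(encode("hello seq2seq")))  # → HELLO SEQ2SEQ
```

Any task that fits this encode-then-decode shape, from translation to captioning, is a candidate for the framework.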
10. Pix2code
This deep learning project is exciting in that it tries to automatically generate code for a given GUI. When building a website or mobile interface, front-end engineers usually have to write a lot of boring boilerplate code, which is time-consuming and inefficient, and keeps them from spending their time on real functionality and software logic. Pix2code aims to overcome this difficulty by automating the process, using a novel approach that generates code from a single GUI screenshot as input.
Pix2code is written in Python and converts screenshots captured from mobile and web interfaces into code.
(Source: Heart of the Machine)