Academic Paper Summary

 



Artificial intelligence (AI) is the branch of computer science that studies "intelligent agents": machines that simulate cognitive processes, such as learning and problem-solving, typically associated with human minds. One of the field's long-term objectives is general intelligence. AI research has been growing since the 1980s, but until around 2012 it was largely confined to commercial and university research labs. Major contributing factors to its recent expansion are the availability of large volumes of data and of faster, more powerful computers. A graphics processing unit (GPU) is a specialised electronic circuit designed to rapidly access, manipulate, and produce images from memory. GPUs are used in game consoles, personal computers, workstations, smartphones, and embedded devices. Modern GPUs excel at image processing and computer graphics, and they now power the most advanced neural networks, providing 10 to 100 times the computational power of conventional CPUs. The fastest GPUs offer memory bandwidth of up to 750 GB/s, which is enormous compared with a conventional CPU.

Machine learning is the study of algorithms that can learn from and make predictions on data. Arthur Samuel coined the term "machine learning" in 1959 to describe how computers can learn without being explicitly programmed. The field is closely related to, and frequently overlaps with, computational statistics, which also focuses on making predictions using computers. Beyond supervised and unsupervised learning, two further methods are occasionally used: semi-supervised learning and reinforcement learning. Reinforcement learning is frequently applied in robotics, video games, and navigation; its three main components are the agent, the environment, and the actions. Semi-supervised learning is useful when the cost of labelling is too high, as it trains on both labelled and unlabelled data.

Deep learning research concerns artificial neural networks and related machine learning techniques with more than one hidden layer. These networks extract and transform features using a cascade of many layers of nonlinear processing units, learning multiple levels of representation that correspond to different degrees of abstraction. Convolutional neural networks (CNNs) consist of multiple layers of receptive fields and are widely used in image and video recognition, recommender systems, and natural language processing. One of the main advantages of these networks is the use of shared weights in the convolutional layers (Ongsulee, 2017).
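The agent, environment, and actions of reinforcement learning can be made concrete with a small sketch. The example below is purely illustrative and not from the paper: it uses tabular Q-learning (one standard reinforcement learning algorithm) on a made-up five-cell corridor environment, where the agent learns by trial and error to walk right towards a reward.

```python
import random

# Illustrative sketch (not from the cited paper): the agent/environment/action
# loop of reinforcement learning, using tabular Q-learning on a toy task.
# Environment: a corridor of 5 cells; the agent starts at cell 0 and
# receives a reward of +1 only for reaching cell 4.

N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Environment dynamics: apply an action, return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Agent: a table of action values, updated by the Q-learning rule,
# with epsilon-greedy exploration.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: the agent moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
print({s: policy[s] for s in range(4)})
```

The three components from the text map directly onto the code: `step` is the environment, the Q-table plus the action-selection rule is the agent, and `ACTIONS` are the actions it may take; the reward signal is what drives learning.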
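The benefit of shared weights in convolutional layers can be seen in a minimal one-dimensional convolution, sketched below under illustrative assumptions (the kernel and input values are made up). The same small kernel is applied at every position of the input, so the layer needs only `len(kernel)` parameters regardless of the input length; a fully connected layer mapping the same 6 inputs to 4 outputs would need 24 weights.

```python
# Illustrative sketch: weight sharing in a 1-D convolution. The SAME kernel
# (3 weights) is reused at every position, instead of learning separate
# weights per position as a fully connected layer would.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation): slide one shared kernel."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

signal = [1, 2, 3, 4, 5, 6]
kernel = [1, 0, -1]  # a simple difference filter; only 3 shared weights

print(conv1d(signal, kernel))  # → [-2, -2, -2, -2]
```

The same principle extends to 2-D convolutions over images: each feature map is produced by one small shared filter, which is what keeps CNN parameter counts manageable and makes the learned features translation-equivariant.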





References

Ongsulee, P. (2017) "Artificial intelligence, machine learning and deep learning," 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE). Available at: https://doi.org/10.1109/ictke.2017.8259629.
