
Edge AI, the next step to ambient computing: intelligence on smartphones and technology rebuilt around AI

It is no overstatement to say that the smartphone industry, along with much of the wider technology industry, is being transformed by and rebuilt around AI. From the user's perspective, things simply run a bit more smoothly, often without any awareness that AI is involved.

Major handset manufacturers are investing in the user interfaces of their smartphones so that they can embed AI technology and move further ahead in the market.

And this is only the beginning: the mobile-grade microprocessor chips that perform these AI calculations will become cheaper over time, and they are likely to find their way not just into every phone but into every connected device in the world through a 5G-connected Internet of Things (IoT). Talks are already underway about 6G, which is expected to hit the market by around 2030.

Edge AI is no longer in the blueprint phase. It has entered mainstream adoption, and it is growing at a sensational rate. It combines edge computing and edge intelligence to run machine learning tasks directly on end devices. Such a device contains a built-in microprocessor and sensors, and the data processing task is completed locally, with results stored at the edge node. Running machine learning models with Edge AI decreases latency and reduces network bandwidth usage.

Artificial Intelligence on the Edge 




Machine learning is a subset of artificial intelligence (AI) in which a system learns from data to perform perceptive tasks, often in a fraction of the time it would take a human.
Edge computing refers to the act of bringing computing services physically closer to either the user or the source of the data.

There are three kinds of machine learning: supervised, unsupervised and reinforcement learning. The first kind gets its name because the machine is supervised while it learns: the algorithm is fed labeled examples that teach it the right answers.
E.g.: object recognition.

Unsupervised learning has no labeled inputs and is mainly used in predictive models and for discovering structure in data.
Reinforcement learning differs from supervised learning in that supervised learning has an answer key: the model is trained with the correct answers. Reinforcement learning has no answer key; the reinforcement agent has to decide for itself what to do to perform the task. It is the science of decision making, in which the system learns by trial and error, using algorithms that learn from outcomes to decide which action to perform next.
Main points to take away from reinforcement learning:
1. Input: the input is an initial state from which the model will start.
2. Output: there are many possible outputs, as there are a variety of solutions.
3. The training is based on the input. The model returns a state, and based on the output the user rewards or punishes it.
4. The model continues to learn.
5. The best solution is the one with the maximum reward.
Examples: robotics for industrial automation, the game of chess, adaptive controllers.
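The trial-and-error loop described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production algorithm: the "environment" is a hypothetical two-armed slot machine with made-up average payouts, and the agent learns each arm's value purely from observed rewards, with no answer key.

```python
import random

def run_bandit(true_means, episodes=5000, eps=0.1, seed=0):
    """Trial-and-error learning on a two-armed bandit: the agent has no
    answer key; it estimates each action's value from outcomes alone."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)   # learned value of each action
    counts = [0] * len(true_means)
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best estimate so far.
        if rng.random() < eps:
            a = rng.randrange(len(true_means))
        else:
            a = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = true_means[a] + rng.gauss(0, 0.1)    # noisy outcome
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
    return estimates

# Hypothetical arms: arm 1 pays more on average, and the agent discovers this.
est = run_bandit([0.2, 0.8])
```

After enough episodes the estimate for the better arm dominates, so the "maximum reward" solution emerges from outcomes alone.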

In a nutshell: in supervised learning, each decision is made on the input given at that moment. Decisions are independent of each other, so a label is given to each decision.
In reinforcement learning, it is all about making decisions sequentially: the output depends on the current input, and the next output depends on the previous output. Decisions are dependent, so a label is given to a sequence of decisions.

What is Deep Learning

Deep learning mimics how a human brain functions (hence it has neurons, networks, etc., similar to the human brain). Deep neural networks are inspired by neurobiology: at a high level, a biological neuron receives multiple signals through synapses contacting its dendrites and sends action potentials down its axon, and the complexity of input patterns is reduced by categorizing them. The "deep" refers to higher complexity in the number of layers and the number of units in each layer. The two phases are training and inference, which correspond to the development phase versus production. When the amount of data keeps increasing, classical ML reaches a saturation point, and that is where deep learning comes into the picture: deep learning is a type of ML suited to very large datasets. It is an ML technique for teaching machines what comes naturally to human beings. It can recognize patterns in pictures, text, sound and other data to produce insights and predictions.

The building block of deep learning, the perceptron, was formulated in 1958. So why was deep learning not used back then, while it is used extensively now? The answer is DATA! We were doing just fine with classical ML algorithms, but as the amount of data poured in, and with advances in hardware (GPUs) and software, deep learning became the way to keep improving model performance at scale.

Before going further, we need to understand how a perceptron works. It is like arithmetic: if you understand numbers (the perceptron), you can perform arithmetic operations (DL models).
Perceptron
It is the basic unit of a neural network: it performs a computational operation on its inputs to produce a decision.
Intuition
Suppose you have decided to buy a house. The factors are location, number of bedrooms, total area, and distance from school or workplace. More than the others, you may give prior importance to location and number of bedrooms while making a decision. And while you make an intermediate decision, your brain applies a bias that has developed over the years. E.g.: if you have had white bread all along, you will have a bias for selecting white over brown bread. This is how a perceptron works.


If you look at the diagram, the inputs are the factors you take into consideration, the importance you give to each factor is the weights, and the intermediate decision is a linear combination of the inputs and the bias (orange node). The only difference between the brain and a DL perceptron is that you need to apply a non-linear function on top. Non-linear means the output cannot be reproduced from a linear combination of the inputs (in DL these are called activation functions).
This figure depicts your final decision based on your intermediate decisions.


The different non-linear functions you can apply include sigmoid, ReLU, hyperbolic tangent, etc.

Neural Network

Instead of one intermediate decision, we have several intermediate decisions contributing to the final decision. In the DL world, these intermediate decisions are the hidden nodes. Neural networks are a subset of machine learning algorithms and are referred to as artificial neural networks (ANNs) or simulated neural networks (SNNs); their nomenclature mirrors the human brain, just like communication between organic neurons. They do not require programming with precise rules defining what to expect from the inputs.
For neural networks, there are four essential capabilities:
1. Patterns can be remembered through training; the computer will then match a new input with the closest match, if one is available in memory.
2. Placing patterns into categories.
3. Clustering, to classify inputs without additional context.
4. Prediction, even when all the relevant information is not readily available.
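The first two capabilities, remembering trained patterns and matching a new input against the closest one in memory, can be sketched without any network at all, using a nearest-match lookup. The stored patterns and labels below are made up for illustration.

```python
def closest_match(pattern, memory):
    """Return the stored (pattern, label) pair with the smallest
    squared distance to the query -- the 'closest match in memory' idea."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda item: dist(item[0], pattern))

# Hypothetical trained memory: (pattern, label) pairs.
memory = [((0.0, 0.0), "circle"), ((1.0, 1.0), "square")]
label = closest_match((0.9, 0.8), memory)[1]
```

A query near (1, 1) recalls the "square" pattern, and placing inputs into categories this way is the simplest form of pattern classification.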
Major categories of neural networks

1. Classification
Here the network classifies labeled datasets for supervised learning. E.g.: it can apply labels while identifying visual patterns in images. These networks tackle problems through learning.
2. Sequence learning
This uses a data sequence as input or output.
3. Function approximation
A technique for approximating an underlying function from previous or current observations.

Applying neural networks

Let us take the example of tuning a guitar. Say you want to tune the E string: you tighten and loosen the tuning knob until you finally find the right balance and hit E.

Here, plucking the strings gives the inputs, and the knobs are the weights.

Backpropagation

Step 1: Give the inputs and calculate the empirical loss (what the string is supposed to sound like versus how it actually sounds).
Step 2: Based on the loss, adjust the weights (the knobs).
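The two steps above can be sketched as a loop that repeatedly measures the loss and nudges a single weight against its gradient, just like turning the tuning knob a little after each pluck. The data below is made up (generated by y = 3x), so the correctly "tuned" weight should come out near 3.

```python
def tune(weight, inputs, targets, lr=0.1, steps=200):
    """Gradient descent on one weight: measure how far the output is
    from what it should 'sound' like, then adjust the knob (weight)."""
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            pred = weight * x                 # Step 1: forward pass
            grad = 2 * (pred - y) * x         # d(squared loss)/d(weight)
            weight -= lr * grad               # Step 2: adjust the knob
    return weight

# Hypothetical data from y = 3x; tuning recovers the weight ~3.
w = tune(weight=0.0, inputs=[1, 2, 3], targets=[3, 6, 9])
```

Backpropagation in a real network does exactly this, except the gradient is propagated backwards through every layer so all weights get their own adjustment.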




    

The most popular types of deep neural networks are:
1. Multi-layer perceptrons (MLP)
2. Convolutional neural networks (CNN)
3. Recurrent neural networks (RNN)

Multilayer Perceptrons
The MLP is the most basic deep neural network; it consists of fully connected layers. The MLP method can also be used where the high computing power required by modern deep learning algorithms is unavailable.
Each new layer is a set of non-linear functions of a weighted sum of all outputs from the prior layer, to which it is fully connected.
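That last sentence can be written directly as code: each unit computes a non-linear function of a weighted sum over all of the previous layer's outputs. The tiny 2-input, 2-hidden, 1-output network below uses made-up weights purely for illustration.

```python
def relu(z):
    """The ReLU activation: zero for negative inputs, identity otherwise."""
    return max(0.0, z)

def dense(inputs, weights, biases, act):
    """One fully connected layer: every unit is a non-linear function of
    a weighted sum over ALL inputs from the previous layer."""
    return [act(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 2 -> 2 -> 1 MLP with made-up weights and biases.
hidden = dense([1.0, 2.0], [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.2], relu)
output = dense(hidden, [[1.0, 1.0]], [0.0], lambda z: z)
```

Stacking more `dense` calls gives a deeper network; the "fully connected" part is visible in the inner sum, which touches every input.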

Convolutional neural network

This is yet another class of deep neural networks, used mainly in computer vision. From a given series of images, audio or video, with the help of CNNs the AI learns to extract features of the input and complete a specific task, e.g. image classification, face authentication, and image semantic segmentation.

Different from MLPs, in CNN models one or more convolution layers extract simple features from the input by executing convolution operations. Convolution is a mathematical operation that merges two sets of information. Here each layer is a set of non-linear functions of weighted sums at different coordinates of spatially nearby subsets of outputs from the prior layer, which allows the weights to be reused. This makes CNNs among the most widely used techniques in computer vision tasks. Examples include image classification (AlexNet, VGG networks) and object detection (Mask R-CNN, YOLO).
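The weight reuse that distinguishes a convolution from a fully connected layer is easiest to see in code: the same small kernel slides over every spatial position. The 3x3 "image" and 2x2 edge-detecting kernel below are made up for illustration.

```python
def conv2d(image, kernel):
    """Slide one small kernel over the image: the SAME weights are
    applied at every spatial position, unlike a fully connected layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# Hypothetical 3x3 image with a vertical edge, and a kernel that detects it.
img = [[1, 1, 0],
       [1, 1, 0],
       [1, 1, 0]]
edge = conv2d(img, [[1, -1],
                    [1, -1]])
```

The output is large exactly where the intensity changes, which is how early CNN layers pick out simple features like edges before deeper layers combine them.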
Recurrent neural network

This is another class of network that consumes sequential data. It is used to handle time-series problems with sequential input data.
Here the input consists of the current sample and previous samples, so the connections form a directed graph along a temporal sequence. Each neuron owns an internal memory that keeps information from previous samples.


RNNs are widely used in Natural Language Processing (NLP) because they excel at processing data whose input length is not fixed. With them, AI can comprehend language spoken by humans, e.g. natural language modelling, word embedding and machine translation.
In an RNN, each layer is a collection of non-linear functions of weighted sums of the inputs and the previous state. The basic unit of an RNN is called a "cell"; each cell consists of layers, and a series of cells enables the sequential processing of recurrent neural network models.
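A single recurrent cell can be sketched as a loop over the sequence in which each output depends on the current input and the previous hidden state. The weights below are made-up scalars, chosen only to show the internal memory carrying information forward.

```python
import math

def rnn(sequence, w_in=0.5, w_rec=0.9, b=0.0):
    """One recurrent cell: each state is a non-linear function of the
    current input and the previous hidden state (the cell's memory)."""
    h = 0.0
    states = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + b)   # state carries history
        states.append(h)
    return states

# Only the first input is non-zero, yet later states stay positive:
# the cell's memory keeps information from the earlier sample.
states = rnn([1.0, 0.0, 0.0])
```

This is why RNNs handle inputs of any length: the same cell is applied step after step, with the hidden state threading the sequence together.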





Edge AI is driven by Big Data and IoT. Today, in the era of the Internet of Things (IoT), an unprecedented volume of data generated by connected devices needs to be collected and analyzed. This leads to the generation of large quantities of data in real time, which requires AI systems to make sense of the data.

Edge AI: Industrial Use Cases
Edge AI is loved by the tech world. Since it is a new-age sensing technology, it has a huge ability to observe users in real time and gain greater awareness for taking intelligent, powerful actions. You encounter Edge AI when your smartphone unlocks in a fraction of a second simply by registering and recognizing your face. Self-driving cars are another, more complex example, where the car drives on its own without any human intervention. All the required data is right there on your smartphone or your car, and in no time data can be sent to the cloud for feedback.
Google Maps is backed by AI-driven technology that alerts you about traffic conditions; from there to speech-to-text algorithms, smart AI is everywhere. It holds humongous potential: as per industry reports, AI edge device shipments are set to increase through 2025.
The popular AI-powered edge devices include head-mounted displays, smart speakers, mobile phones, PCs/tablets, automotive sensors, robots, security cameras and drones that use video analytics. In addition, wearable health sensors will see high adoption. Edge AI will most likely benefit industry-heavy applications, including supply chains and manufacturing lines. Particularly in the Industrial Internet of Things (IIoT), enterprises will see a more tangible RoI. For instance, manufacturing industries could use edge AI for predictive maintenance, troubleshooting and identifying issues within a complex physical system. Besides, Edge AI could also be used to automate product testing and inspection, increasing quality while reducing resource expenditure.

Smart cameras can minimize communication with remote servers by streaming data only around a triggering event. This also reduces remote processing and memory requirements. Among the most talked-about applications of deep learning and Edge AI are intruder monitoring systems that secure homes against intrusion; this is vitally important for safeguarding homes and monitoring elderly people. Text to Speech (TTS) and Speech to Text (STT) are two examples that leverage deep learning to bring these functionalities to the edge. Examples include hands-free text read and write functions in automotive, where the driver can keep attention on driving the car while interacting with the infotainment system simultaneously.

With the shift of AI to the edge, brace for a number of changes underway. These tectonic shifts include the emergence of 5G networks and smart devices, and the growth in demand for IoT devices.

Deep-learning diagnoses: Edge AI detects COVID-19 from smartwatch sensors
Combining questions about a person’s health with data from smartwatch
sensors, a new app developed using research at Princeton University can
predict within minutes whether someone is infected with COVID-19.
This new breed of diagnostic tool stems from research led by Niraj Jha, a
professor of electrical and computer engineering at Princeton University.
 His team is developing artificial intelligence (AI) technology for COVID-19
detection, as well as diagnosis and monitoring of chronic conditions including
depression, bipolar disorder, schizophrenia, diabetes and sickle cell disease.

Jha's research group at Princeton has long focused on enabling deep learning, which is typically energy-intensive, to function on low-power electronic devices such as phones and watches instead of centralized cloud computing centers.
This approach, known as edge AI, has the added benefit of helping to preserve
users’ privacy and increase security.

AI holds the key to a magnificent future, driven by data and by computers that understand our world, in which we will make more informed decisions. Apart from its advantages, AI has disadvantages as well, such as automation-spurred job loss, privacy violations, deepfakes, algorithmic bias caused by bad data, socio-economic inequality, market volatility and weapons automation.
AI's impact on society is a widely debated topic; for now, let us concentrate on the positive side.

 





