Strictly speaking, machine-to-machine
communication is a special case of software-to-software communication. Machines have
microprocessors or microcontrollers physically connected to sensors and actuators in
various parts of the machine. Sensors (for example, a temperature sensor) collect data that
is digitized and input into the software running
on these microprocessors or microcontrollers.
This software runs some logic to figure out
what to do based on this data. This could be
sending a message to another piece of software
also running on a microprocessor or microcontroller in another machine. This software,
in turn, also runs some logic to figure out what
needs to be done with this message, which may
be sending a command to a physical actuator in
the machine to perform some physical action
(for example, shutting off a valve).
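The sense-decide-act loop described above can be sketched in a few lines. Everything here is illustrative: the threshold, the message format, and the `read_temperature`, `send_message`, and `close_valve` callables are hypothetical stand-ins for the sensor input, messaging layer, and actuator of two machines.

```python
import json

SHUTOFF_THRESHOLD_C = 90.0  # hypothetical safety limit


def machine_a_loop(read_temperature, send_message):
    """Machine A: take a digitized sensor reading and decide whether to
    send a message to the software running on another machine."""
    temp_c = read_temperature()  # digitized output of a temperature sensor
    if temp_c > SHUTOFF_THRESHOLD_C:
        send_message(json.dumps({"event": "overheat", "temp_c": temp_c}))


def machine_b_handler(message, close_valve):
    """Machine B: run some logic on the received message and, if needed,
    command a physical actuator (for example, shutting off a valve)."""
    payload = json.loads(message)
    if payload["event"] == "overheat":
        close_valve()  # physical action in the machine
```

In a real machine these callables would be backed by sensor drivers and a transport such as MQTT; the sketch only shows the shape of the logic on each side.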
Machine-to-machine communication has
been around for decades. However, it has
recently achieved renewed relevancy thanks
to the emergence of IoT and Industry 4.0
scenarios. The difference is that, in the past,
this communication was largely proprietary
and closed (i.e., involving machines from the
same vendor, communicating across closed
networks, through proprietary protocols).
Today, the push is for open, multi-vendor,
Internet-based, standards-based, plug-and-play machine-to-machine communication.
Cloud platforms, again, are very useful for
enabling this modern type of machine-to-machine communication. They do so by providing IoT platforms that sit on top of their
messaging platforms and can handle the potentially huge volume of messages sent by
devices, as well as help with device registration, management, provisioning, security,
and monitoring.
As just mentioned, IoT devices normally
generate large amounts of data. The challenge now is how to extract actionable insight
from this data. This is where, you guessed it,
the topic of big data comes in. In big data
scenarios, the data is coming at such volume,
variety, and velocity (the 3 Vs of big data)
that it renders traditional data processing
and storage mechanisms inadequate. In
order to deal with this, data is stored in its
native format – structured, unstructured,
or anything in between – in data lakes that
hold the raw data so that you can analyze it
or transform and move it later.
For instance, you could use a data lake to
store all the data that you get from your IoT
devices that are collecting temperature data.
You can leave the data in the store and then
filter through it and create a view of the data
per hour or per week.
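The per-hour view described above can be sketched with standard-library Python. The raw `(timestamp, temperature)` records here are hypothetical stand-ins for records pulled from a data lake; note that the raw data is left untouched and the view is derived from it.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean


def hourly_view(readings):
    """Derive a per-hour average-temperature view from raw records.

    `readings` is an iterable of (timestamp, temp_c) pairs, standing in
    for raw IoT records read from a data lake. The records themselves
    are not modified; this function only builds one view over them.
    """
    buckets = defaultdict(list)
    for ts, temp_c in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(temp_c)
    return {hour: mean(temps) for hour, temps in sorted(buckets.items())}


# Hypothetical raw readings as they might land in the lake
raw = [
    (datetime(2024, 1, 1, 9, 5), 20.0),
    (datetime(2024, 1, 1, 9, 50), 22.0),
    (datetime(2024, 1, 1, 10, 10), 25.0),
]
print(hourly_view(raw))
```

A per-week view would follow the same pattern with a coarser bucketing key; at data-lake scale this grouping would typically run in a distributed query engine rather than in-process.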
There are two important and related
trends that are particularly interesting for
manufacturing IoT scenarios: hybrid clouds
and edge computing.
Hybrid clouds are deployments where you
have a combination of the public cloud and
on-premises infrastructure working together.