The Path To IIoT Scalability
AWS cloud services are designed for effortless scalability, but only within each service's limits. Many offer automated scaling that adds resources across multiple Availability Zones, or even across multiple Regions, as you grow.
However, scalability involves both horizontal and vertical scaling of compute configurations and the related hardware and software services. You need to design your IIoT platform deliberately to handle every type of scaling; otherwise, you risk being unable to handle the loads of your various use cases.
Consider The Limitations Of AWS Services
Each AWS service has some flexible (soft) limits and some stricter (hard) limits. Whether you run your microservices with auto-scaling policies on virtual machines, containers, or serverless, your workload ultimately runs on physical infrastructure. This infrastructure, although managed for you, has limits. The flexible limits protect you from provisioning capacity you may not need, and they can be raised on request.
The strict limits are rooted in the design of the AWS service and cannot be modified. Even when a service scales automatically, scaling still takes startup time: provisioning another container or underlying host, or rebalancing the load onto the new infrastructure.
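Because raising a soft limit takes time, it pays to request the increase before traffic reaches it. The sketch below shows the idea with plain Python; the quota value and the 80% headroom threshold are hypothetical, not actual AWS defaults.

```python
# Sketch: trigger a soft-limit (service quota) increase request before
# the workload actually hits the limit. The quota value and the 80%
# headroom threshold below are hypothetical examples.

def needs_quota_increase(current_usage: float, quota: float,
                         headroom: float = 0.8) -> bool:
    """Return True once usage crosses the headroom fraction of the quota."""
    return current_usage >= quota * headroom

# Hypothetical quota: 10,000 concurrent device connections.
QUOTA = 10_000
print(needs_quota_increase(7_900, QUOTA))  # below the 80% threshold -> False
print(needs_quota_increase(8_200, QUOTA))  # above the threshold -> True
```

In practice the check would run against metrics you collect (for example, from CloudWatch), and the increase request would go through the AWS Service Quotas console or API rather than being printed.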
Be aware of all service limitations and build a solution that works within these limits.
Consider The Limitations Of The IIoT Solution/Platform
Once the platform is ready for use, how will you cope with unexpected spikes that hit service limits? Suppose you open your platform’s APIs to third-party developers. In that case, how your platform is used at any given moment becomes unpredictable, and new applications may drive more traffic through your APIs than your platform can handle.
This will lead you to reach the limits of your underlying AWS services (either hard limits or limits set by your configuration) and can potentially cause disruption or data loss. Do you want to manage these situations proactively or reactively? Or do you want to make sure they don’t happen at all?
To avoid these situations, you could oversize your infrastructure to cover peak usage. This helps if you cannot predict your actual needs at any given time, but it is costly.
A better way to manage capacity limits is to communicate a shared responsibility model: define the conditions of use of the IIoT solution/platform, and report the current platform limits to developers as you measure usage: highs, lows, peaks, and daily/weekly averages.
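The usage figures you report back to developers can come from a simple summary over your collected metrics. A minimal sketch, using hypothetical hourly request counts:

```python
# Sketch: summarize usage samples into the figures reported to developers
# (low, peak, average). The sample data below is hypothetical.
from statistics import mean

def summarize_usage(samples: list[float]) -> dict:
    """Compute the low, peak, and average of a series of usage samples."""
    return {
        "low": min(samples),
        "peak": max(samples),
        "average": mean(samples),
    }

# Hypothetical hourly API request counts for one day:
hourly = [120, 95, 80, 300, 950, 1200, 1100, 400]
report = summarize_usage(hourly)
print(report)  # peak is 1200, low is 80
```

The same function applied to daily totals yields the daily and weekly averages mentioned above; in a real platform these samples would come from a metrics store such as CloudWatch rather than a hard-coded list.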
However, unintended or unplanned use of IIoT services can still occur, so you need to be prepared to handle it.
You could enforce your IIoT service limits by asking developers to declare their expected capacity needs at registration. Another way to manage load is to have your field devices send notifications when limits are about to be reached; these notifications can trigger your IIoT services to allocate more capacity.
Overall, if you don’t place limits on how your platform can be used, its stability will become an issue sooner or later. AWS service limits exist for the same reason: to reduce the potential negative impact of a single customer’s excessive resource consumption on the rest of the customers.
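A common way to enforce per-client limits is a token bucket: each client can burst up to a fixed size and is then throttled to a sustained rate. The sketch below is a minimal illustration of the idea; the capacities and rates are hypothetical, not values from any AWS service.

```python
# Sketch: a per-client token-bucket throttle, the same idea behind
# per-customer service limits. Capacity and refill rate are hypothetical.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)     # start with a full bucket
        self.refill = refill_per_sec      # sustained allowed rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise throttle the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client keeps a noisy client from starving the others.
buckets = {"client-a": TokenBucket(capacity=3, refill_per_sec=1.0)}
results = [buckets["client-a"].allow() for _ in range(5)]
print(results)  # the first 3 requests pass; the rest are throttled
```

Keeping a separate bucket per API key is what isolates one third-party application’s burst from the rest of the platform.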
Right-Size The Infrastructure
Right-sizing your infrastructure goes hand in hand with managing anticipated or unexpected peak usage, and with releasing compute capacity when it is no longer needed. Early on, establish baseline and scaling policies and fine-tune their configuration to fit your workload’s behavior. Right-sizing also goes beyond production: developers often overlook the cost of the infrastructure in pre-production stages.
Do your engineers plan to develop and test around the clock? If not, adopt more aggressive auto-scaling policies: schedule start and stop times for your resources, or even provision your environments on demand. As you move from development to production, costs are replicated across environments, so keep them as low as possible. Establish rules for using the different environments, and take advantage of the different instance families and sizes that best suit your needs. Avoid duplicating unnecessary costs at any stage.
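The scheduled start/stop policy can be expressed as a simple predicate that an automation job evaluates each hour. The weekday 08:00 to 20:00 window below is a hypothetical policy, not an AWS default.

```python
# Sketch: decide whether a dev/test environment should be running,
# based on a scheduled working window. The 08:00-20:00 weekday window
# is a hypothetical policy chosen for illustration.
from datetime import datetime

def should_run(now: datetime, start_hour: int = 8, stop_hour: int = 20) -> bool:
    """True only on weekdays within the scheduled working window."""
    is_weekday = now.weekday() < 5          # Mon=0 .. Fri=4
    in_window = start_hour <= now.hour < stop_hour
    return is_weekday and in_window

print(should_run(datetime(2024, 3, 6, 10, 0)))   # Wednesday 10:00 -> True
print(should_run(datetime(2024, 3, 9, 10, 0)))   # Saturday -> False
```

In an AWS setup the same decision is usually delegated to scheduled scaling actions or an EventBridge rule that stops and starts instances, rather than custom code, but the predicate captures the policy you would configure.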