Inside DevOps: Red Hat Certified Architect Talks About His Projects and Industry Best Practices

Vadim Timonin

DevOps has been one of the key IT trends of recent years. This approach brings developers and IT operations specialists together to automate processes in order to accelerate software releases, improve quality, and manage infrastructure more efficiently. In 2023, the global DevOps market was estimated to be worth about $11bn, and, with demand growing steadily, it could grow almost sixfold by 2032.

Vadim Timonin works in this very field. An internationally recognized IT specialist who holds numerous certifications from Red Hat, Amazon, Microsoft, and Google, he has delivered projects across a wide range of industries.

In this interview, Vadim talks about his flagship projects and shares his DevOps best practices.

Vadim, tell us what you work with and what tasks you have to solve.
The range of tasks is very wide. It includes infrastructure planning and deployment, selection of suitable technologies, and deployment of fault-tolerant clusters based on Kubernetes. I am also involved in migrations between cloud providers such as AWS, Azure, and Google Cloud, ensuring uptime and minimizing downtime during the move. In addition, my responsibilities include organizing Continuous Integration/Continuous Delivery processes to automate the build, testing, and deployment of applications, as well as taking on individual complex projects.

Can you give an example of such projects?
One of my recent projects at Digital IQ was a large-scale migration of the applications and services of our client, Thryv, from the AWS cloud to the company's own data centre.

How did you implement this project, what challenges did you face, and what results did you achieve?
Firstly, we had to organize a pool of Kubernetes clusters for different environments using open-source solutions. This required careful planning, as we could not rely on external support in case of problems.

Secondly, the company's data centre had to be seamlessly integrated with the existing AWS infrastructure. This required working with the network and development teams to ensure reliability and compatibility.

The result was a significant reduction in AWS costs, saving hundreds of thousands of dollars. This migration allowed Thryv to optimize costs while maintaining a highly efficient and reliable infrastructure.

In addition to Thryv, you've worked with industry leaders, including Europe's largest clothing retailer, H&M. Can you tell us what exactly you did for them?
During my collaboration with GDC (ICL Services), a key partner of the world-famous Japanese IT giant Fujitsu, I worked on the H&M project as a systems engineer. The main task was to provide 24/7 support for H&M's infrastructure in Europe, Japan, Korea, and Australia. I managed a platform that kept over 1,000 servers running smoothly, installing patches and updates, documenting procedures, monitoring system stability, and resolving technical issues quickly. I also implemented new functionality at night so as not to disrupt shop operations.

What other industries have you had projects in?
For example, in the financial sector, I completely transformed one company's development processes and infrastructure. When I started the project, the client was releasing application updates only once every six months, an extremely slow cadence for today's competitive market.

I set up a cloud infrastructure, migrated applications from traditional virtual machines to a microservice architecture, implemented CI/CD processes to automate deployment, and set up monitoring and incident reporting systems. I also developed backup solutions, automated testing, and implemented GitOps practices that accelerated experimentation with infrastructure configurations. In addition, I trained developers on the new approaches and tools.

The result was a significant reduction in the update release cycle and improved application reliability, fault tolerance, and scalability. Developers were able to focus on writing code instead of spending time on operational tasks.

Another project was a collaboration with a leading US software vendor. As part of it, I automated processes for Playdots, a mobile game studio. I designed and deployed infrastructure in AWS, set up Continuous Integration/Continuous Delivery pipelines, created Helm charts and Containerfiles to automate application deployment to Kubernetes, and implemented GitOps. Some applications were already running on a microservice architecture, but others had to be migrated from legacy systems to reduce the burden on developers and increase efficiency.
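To give a flavour of the kind of deployment automation described here, below is a minimal sketch of a pipeline's final deploy step that applies a Helm chart with the image tag produced by the build stage. The release name, chart path, and tag are hypothetical; this is an illustration rather than the project's actual pipeline.

```python
import subprocess

def deploy(release: str, chart_path: str, namespace: str, image_tag: str) -> None:
    """Install or upgrade a Helm release as the final step of a CI/CD pipeline."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart_path,
            "--namespace", namespace,
            "--create-namespace",
            # Override the image tag produced by the build stage.
            "--set", f"image.tag={image_tag}",
            # Wait for the rollout so the pipeline fails fast on a bad release.
            "--wait", "--timeout", "5m",
        ],
        check=True,  # Raise CalledProcessError if helm exits non-zero.
    )

if __name__ == "__main__":
    # Hypothetical release, chart, and tag, for illustration only.
    deploy("web-frontend", "./charts/web-frontend", "production", "1.4.2")
```

Running the same idempotent `helm upgrade --install` command on every pipeline run is what makes the step safe to repeat, whether the release already exists or not.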

Are there specific considerations depending on the industry?
Of course. In the financial sector, the emphasis is on regulatory compliance. Regulators set strict rules for processing, storing, and transmitting data.

Auditors also impose stringent security requirements, so the choice of technology and architecture has to be compliance-driven. Companies use approaches such as infrastructure-as-code, automated testing, data encryption, strong identity management, and fine-grained separation of rights and access. Comprehensive logging and monitoring solutions are used to increase transparency.
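As a concrete illustration of automating such checks, here is a minimal sketch, assuming AWS and the boto3 SDK (the interview does not name the client's actual tooling), that flags S3 buckets lacking a default server-side encryption configuration:

```python
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    """Return the names of S3 buckets without default server-side encryption."""
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Raises ClientError if no default encryption configuration exists.
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
            else:
                raise
    return missing

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"Bucket without default encryption: {name}")
```

A script like this can run on a schedule and feed its findings into the same logging and alerting pipeline auditors review.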

And in the gaming industry, for example, you need to account for periods of high demand. During in-game events, the load on services spikes. The architecture must be configured to scale out when the number of requests grows and to release resources when demand drops.
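In Kubernetes, this elasticity is typically handled by the Horizontal Pod Autoscaler, whose core scaling rule is simple enough to sketch in a few lines. The replica bounds and metric values below are illustrative:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 2,
                     max_replicas: int = 50) -> int:
    """Replica count using the Horizontal Pod Autoscaler's core formula:
    desired = ceil(current * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# During an in-game event, average CPU jumps to 180% against a 60% target:
print(desired_replicas(current_replicas=10, current_metric=180, target_metric=60))  # -> 30
# When demand drops back to 30%, the same rule scales the service in:
print(desired_replicas(current_replicas=30, current_metric=30, target_metric=60))   # -> 15
```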

In retail, clients face similar challenges due to seasonality. And if a company runs a sale, all discount information must be correctly reflected in the database. You need a single data centre that is synchronized with the IT infrastructure of the shops. In the case of big players like H&M, you have to take into account that promotion terms will differ from country to country. The company can use an extensive network of data centres that all receive data from a single source of truth.

You've had experience both automating application deployments from scratch and combining new and legacy technologies. In your experience, which is easier and more efficient?
It really depends on the project and the client's requirements. If speed and flexibility are important, or current systems severely limit development, greenfield development may be better. If the key factor is budget or the risks of implementation are too high, it is worth considering a phased system upgrade.

What are the essential best practices for migrating applications to a microservice architecture?
Use minimal container images to reduce the amount of data that needs to be transferred and stored. This reduces startup time and improves security because there are fewer potentially vulnerable components in the containers. Each microservice should be deployed in a separate container to allow for process isolation and independent scaling.

Integrating CI/CD to automate the testing and deployment of changes is mandatory; it allows for faster iterations and a lower chance of errors. And, of course, regularly scanning container images for vulnerabilities is a critical part of the job, without which infrastructure security cannot be guaranteed.
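As an illustration of how such scanning can be wired into a pipeline, here is a minimal sketch assuming the open-source Trivy scanner (one of several tools that can fill this role; the interview does not name a specific one). It fails the CI job when HIGH or CRITICAL findings are present:

```python
import json
import subprocess
import sys

def scan_image(image: str) -> int:
    """Scan a container image with Trivy; return the count of HIGH/CRITICAL findings."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json",
         "--severity", "HIGH,CRITICAL", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    # Each result may have a null/absent "Vulnerabilities" list.
    return sum(len(r.get("Vulnerabilities") or []) for r in report.get("Results", []))

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "nginx:latest"
    count = scan_image(image)
    print(f"{image}: {count} high/critical vulnerabilities")
    sys.exit(1 if count else 0)  # Fail the CI job if anything serious is found.
```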

Many organizations are now deploying Kubernetes using a multi-cloud approach. Is this a general market trend?
Yes, but the approach has both pros and cons. The pros include flexibility: organizations can choose the right vendor for their needs and optimize costs. Vendor lock-in can also be reduced.

But managing multiple cloud providers and on-premises resources requires complex coordination, and the complexity of the IT infrastructure grows. In addition, compatibility and integration between platforms must be ensured, which adds cost and labour.

Another clear trend is the use of artificial intelligence. In your opinion, how exactly can DevOps change in the near future under the influence of AI/ML?
I think DevOps will continue to exist, although some of its approaches will gradually change in line with new trends. There will be a growing demand for specialists capable of customizing and deploying AI/ML solutions in infrastructure.

AI can be used to solve a wide range of problems, such as creating more efficient and accurate automated testing systems, improving monitoring, and managing systems by analysing huge amounts of data. AI is also capable of identifying unusual patterns of behaviour that may indicate security breaches.
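To make that last point concrete, the statistical core of metric anomaly detection can be illustrated with a toy rolling z-score detector. A real AIOps system would use far richer models; this is purely a sketch:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(metrics, window: int = 30, threshold: float = 3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the rolling mean of the last `window` observations."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(metrics):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# A steady request rate with one sudden spike, e.g. a traffic anomaly:
series = [100, 102, 98, 101, 99, 103, 97, 100, 400, 101]
print(detect_anomalies(series, window=5))  # -> [(8, 400)]
```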

Vadim, apart from creating innovative products, you also invest a lot of time in promoting the industry.
Yes, I try to give back to the community as much as possible and to promote the IT industry in general. I co-author scientific articles with colleagues from the industry. I am also invited to judge various IT competitions, where I evaluate participants' work and help select the winners.

One of the recent events was Hackathon Raptors 2024, dedicated to the development of interactive educational games.

You currently hold more than 20 certifications, including the top-level Red Hat Certified Architect credential. What are the benefits of certification for engineers, and which vendors' certifications do you favour?
Continuous learning and development is an integral part of working in the IT industry, where technology is changing rapidly. Certification not only enhances your professional knowledge and skills but also opens up new career prospects. It makes you more competitive, especially if you have unique or in-demand certifications. In the eyes of employers, it is proof that you are capable of working with modern solutions and are ready for complex tasks.

I pay special attention to certifications from Red Hat. Red Hat is a global leader in the world of open-source software and has had a huge impact on the development of open-source technologies. Their solutions, such as Red Hat Enterprise Linux, OpenShift, and Ansible, are widely used by leading corporations, including Amazon, IBM, JPMorgan, and many others. Red Hat products are the foundation for robust and scalable enterprise solutions, making their certifications extremely valuable to engineers.

Becoming a Red Hat Certified Architect (RHCA) is a serious challenge that requires not only in-depth knowledge but also practical experience. This status confirms expertise in a wide range of technologies, including containerization, automation, security, and application development. There are very few RHCA-level specialists in the CIS countries, which makes it especially meaningful for me to be part of this community. This is not just an achievement. It is an opportunity to be at the forefront of technologies that are changing the world.

In addition, Red Hat is actively involved in the development of open-source projects such as Kubernetes, which strengthens its leadership and supports the developer community around the world. For me, being certified by such a company is not only a professional achievement but also a way to stay at the forefront of the technologies shaping the future.
