Cloud Adoption and Code Smells
A systematic study of code smells on recent development work.
The code quality of a software application typically degrades during new or ongoing feature development, and during redesign or refactoring efforts undertaken to adapt to a new design or to counter technical debt. At the same time, the rapid adoption of microservices architecture in brownfield projects, often under the influence of cognitive bias towards its predecessor, service-oriented architecture, can also affect code quality. Even a carefully planned shift to microservices is an iterative, massive investment, and the code rework it entails attracts technical debt of its own. Technical debt is inevitable; the only realistic option is to control and reduce its impact. It is compounded by several factors:
- challenging deadlines, lead time to market, and cost constraints;
- warnings, bugs, and code smells flagged by static code analysis tools being ignored;
- porting code to a more recent version of a programming language or framework, which introduces further code smells, anti-patterns, and security vulnerabilities.
Hence, some organizations are resorting to low-code/no-code (LCNC) platforms, delivered as SaaS, to avoid this churn. LCNC platform providers, in turn, are leveraging microservices and serverless architecture to build their platforms and products. This study attempts to cover both aspects: code smells in the monolith and microservices architectural styles, and the factors influencing the decision to adopt an LCNC platform.
The study systematically analyses code smells from public datasets acquired from earlier research on monolith codebases, and prepares a new dataset for microservices projects by running static code analysis tools to extract code smells. Microservice architectures also produce Infrastructure as Code (IaC) artefacts for CI/CD pipelines and containerization; prior research has shown that these artefacts are candidates for code smells as well, so this study factors in Dockerfiles, YAML, and other IaC artefacts. Exploratory data analysis is performed on code metrics from prior research on monolith software and on metrics generated from microservices-based code repositories. These code metrics then undergo systematic data analysis, feature engineering, and machine learning model evaluation to study the patterns, the significance of individual code metrics, and the factors leading teams from microservices/monoliths towards no-code/low-code platforms, in order to provide recommendations.
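To make the metric-extraction step concrete, here is a minimal, hypothetical sketch (not the thesis tooling, and the thresholds are illustrative) of how a static pass can derive per-function raw metrics, such as lines of code and parameter count, from which smell detectors like "long method" are computed:

```python
import ast

def extract_metrics(source: str) -> dict:
    """Walk a Python source string and record simple per-function metrics."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            loc = node.end_lineno - node.lineno + 1  # lines of code in the function
            metrics[node.name] = {
                "loc": loc,
                "params": len(node.args.args),
                # crude "long method" flag; real tools use tuned thresholds
                "long_method": loc > 30,
            }
    return metrics

sample = """
def short(a, b):
    return a + b
"""
print(extract_metrics(sample))
# -> {'short': {'loc': 2, 'params': 2, 'long_method': False}}
```

Rows like these, collected over many repositories, are the kind of tabular input the exploratory data analysis and model evaluation steps would consume.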
The data class, large class, and long method smells are no more prevalent in microservices than in monoliths, while unnecessary/unutilised abstraction and long statement remain significant contributors to code smell in microservices. The magic number smell occurs at similar rates in monolith and microservices codebases. Deficient encapsulation, cyclic-dependent modularisation, complex method, and broken hierarchy are significantly rarer, or absent, in microservices.
This was the abstract of my MS thesis; read the detailed report at DOI: 10.13140/RG.2.2.21689.65126.
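As a concrete illustration of one smell mentioned above, here is a small, purely illustrative Python example (not taken from the studied codebases) of the magic number smell and its usual refactoring into a named constant:

```python
# Smelly: 0.18 carries no meaning at the call site, and the same
# literal may be scattered across the codebase.
def total_with_tax_smelly(amount):
    return amount * (1 + 0.18)

# Refactored: a named constant documents intent and centralises change.
# GST_RATE is an illustrative name, not from the thesis.
GST_RATE = 0.18

def total_with_tax(amount):
    return amount * (1 + GST_RATE)
```

Both functions compute the same value; the refactored form simply makes the literal's role explicit, which is why static analysers flag bare numeric literals in expressions.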
If you are interested in Citizen Development, refer to my book outline, Empower Innovation: A Guide to Citizen Development in Microsoft 365.
If you wish to delve into GenAI, read Enter the world of Generative AI
You can also look at this blog post series, compiled from various sources. Stay tuned for the Generative AI Blog Series!