What is a Software Architecture Assessment?
It's much better to discover a missing bedroom while the
architecture is just a blueprint, rather than on moving day. (Paul Clements)
In a software architecture assessment, you try to find problems in the architecture and its implementation. When can you do that?
- At design time, to find flaws early
- During development, to compare the implementation against the design
- During maintenance and further development
In practice, the assessment is most often done after the implementation is complete and the project is already in production.
What is TARA?
The Tiny Architectural Review Approach (TARA) was developed for situations where an exhaustive method is not applicable. Unlike methods such as ATAM, TARA is not scenario-based; it is grounded in industrial experience. It is flexible, easy to use, and will save you time and resources.
When to use TARA?
- No time or focus is available for a scenario-based approach
- The system is already implemented
- The assessment is easy, because no additional techniques such as quality attribute trees, as used in ATAM, are required
- Designed for a single assessor without many participating stakeholders
- Can be used as a first step before a more detailed assessment like ATAM, to first convince the company of the benefits of software architecture assessment
7 Steps of TARA
1. Context Diagram and Requirements
First we have to find out in which context the system lives and which quality requirements have to be met. We also need to identify the key functionality. You can discover the context and the most important functional requirements by asking the team members and users of the system. The quality requirements are harder to find, because in most cases the team struggles to formulate them clearly. It is recommended to suggest some quality requirements (non-functional requirements), such as performance or scalability, based on the application or system context.
2. Functional and deployment views
Once we have identified the requirements and the system context, we can start drawing the functional structure (runtime elements) and the deployment structure (the environment in which the runtime elements are deployed). The result is a so-called functional view sketch.
3. Code analysis
A basic analysis of the code will cover the following information for evaluation:
- Module structure and dependencies
- Measurements like lines of code (LOC), number of classes and test classes or size of binaries
- Static code analysis results like cyclomatic complexity, code duplication, comment to code ratio and code style
- Test coverage
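To make the size measurements concrete, here is a minimal sketch (not part of TARA itself) that gathers two of the metrics above, file count and lines of code, from a local source tree. Real audits would additionally use dedicated tools for complexity, duplication, and coverage:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: counts Java source files and total lines of code under a
// directory. Paths are illustrative; point it at the project root.
public class CodeMetrics {

    /** Returns { number of .java files, total lines of code }. */
    public static long[] countFilesAndLines(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            List<Path> javaFiles = paths
                    .filter(p -> p.toString().endsWith(".java"))
                    .collect(Collectors.toList());
            long lines = 0;
            for (Path file : javaFiles) {
                try (Stream<String> fileLines = Files.lines(file)) {
                    lines += fileLines.count();
                }
            }
            return new long[] { javaFiles.size(), lines };
        }
    }
}
```

Even such crude numbers are useful in step 4, for example to compare the size of the test code against the size of the production code.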
4. Requirements assessment
Now we have to find out how well the system fulfills the functional and quality requirements. The examiner assesses the degree to which each requirement is met, for example on a scale from 1 to 5 or with labels such as high / medium / low. The result should be a clear list of requirements and their degree of fulfillment.
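One simple way to capture that fulfillment list in code (the names here are illustrative, not prescribed by TARA) is a small record with a validated score:

```java
import java.util.List;

// Illustrative model for step 4: each requirement gets an identifier
// and a fulfillment score from 1 (not met) to 5 (fully met).
public class RequirementAssessment {

    record Requirement(String id, String description, int score) {
        Requirement {
            if (score < 1 || score > 5) {
                throw new IllegalArgumentException("score must be 1..5");
            }
        }
    }

    // The average fulfillment gives a rough overall picture of the system.
    static double averageScore(List<Requirement> requirements) {
        return requirements.stream()
                .mapToInt(Requirement::score)
                .average()
                .orElse(0);
    }
}
```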
5. Identify and Report Findings
After completing step 4, you will have found positive and negative aspects of the system. Everything should be reported sensitively, and the results should be grouped under a heading and labeled with an identifier.
6. Create conclusions for the sponsor
In this step, the concerns of the sponsor must be taken into account. You need to identify your sponsor's explicit and implicit concerns and questions and make recommendations that address them.
7. Deliver the Findings and Recommendations
With this final step, it’s time to share the results with all stakeholders and anyone who contributed to the review. This can be done by presenting and sharing the documents you have created.
Example of a Web application
Let’s try evaluating a web application and see how the steps work:
The context diagram shows that our web application reads data from an external API for data enrichment, stores and reads data from a database and gets accessed by an internal system. This context diagram helps new engineers get started and gives you a clear picture of the environment in which the application is located. The next step is the functional view sketch, which shows the internal communication between components:
Now let’s review the key requirements that were identified in the conversation with the system developers and what kind of quality attributes are important:
- FR1 — Data maintenance: Users have full CRUD support, including multi-select actions.
- FR2 — Data enrichment: Added data is automatically enriched with data from an external API.
- NFR1 — Availability: The web application should be available 99.99% of the time.
- NFR2 — Performance (UI): The UI should never freeze and should respond to basic actions within less than 1 second. Long-running actions should show a progress bar.
The next step is to analyse the codebase. Here's what we found:
- Implementation Size: 200 Java classes, 34 database tables and 327,586 lines of code
- Test Size: 40 Test cases referencing 85 classes
- Structure: One basic Spring application structure with 10 modules
- Tangled Code: 1 basic Java package “com.mysystem.app”
- Coding Standard: Google Java Style coding standard is used.
Now we report what we have found in the application and evaluate it against the requirements from above:
- Finding 1 — The CRUD functionality, including batch processing, is available and works as expected. 5/5
- Finding 2 — Data enrichment is done by a dedicated component and can be scheduled by a cron runner module. Users expected a UI for changing the scheduled automatic enrichment jobs, but only file-based adjustments are possible. 4/5
- Finding 3 — The application is deployed to two servers with a load balancer managing the traffic. Both application servers are in the same data center, so a location-based outage causes complete downtime. 4/5
- Finding 4 — Measured as an arithmetic mean, the UI responds within 100 ms of triggering local action events. 5/5
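A side note on Finding 4: an arithmetic mean hides outliers, so for response times a percentile is often the more informative number. A small sketch of both statistics (the sample values below are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative statistics over sampled UI response times in milliseconds.
// Real data would come from instrumenting the UI action events.
public class ResponseTimeStats {

    static double mean(List<Double> samplesMs) {
        return samplesMs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }

    // Nearest-rank percentile, p in (0, 100].
    static double percentile(List<Double> samplesMs, double p) {
        List<Double> sorted = new ArrayList<>(samplesMs);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size());
        return sorted.get(Math.max(rank - 1, 0));
    }
}
```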
Last step is to make recommendations and come up with a final conclusion about the project situation:
- Rec 1 — The API doesn't use any kind of caching. To handle multiple clients with good performance, an intermediate caching layer or reverse caching proxy is strongly recommended.
- Rec 2 — A UI for adjusting the schedule of the automatic data enrichment should be implemented. File changes that can only be made by an IT expert rather than by admin users block the user workflow.
- Rec 3 — The application should be deployed to a multi-availability-zone environment to increase its availability.
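To give an idea of what Rec 1 could look like: in production, a library such as Caffeine or a reverse caching proxy would be the real options, but a minimal in-process sketch of the idea, assuming a fixed time-to-live, might be:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal TTL cache sketch for Rec 1. This only illustrates serving
// repeated external-API reads from memory; a production system should
// use an established cache library or a reverse caching proxy.
public class TtlCache<K, V> {

    private record Entry<T>(T value, long expiresAtMs) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMs;

    public TtlCache(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    // Returns the cached value if still fresh, otherwise loads and caches it.
    public V get(K key, Function<K, V> loader) {
        Entry<V> entry = entries.get(key);
        long now = System.currentTimeMillis();
        if (entry == null || entry.expiresAtMs() < now) {
            V value = loader.apply(key);
            entries.put(key, new Entry<>(value, now + ttlMs));
            return value;
        }
        return entry.value();
    }
}
```

The second lookup for the same key within the TTL never touches the external API, which is exactly the load reduction Rec 1 is after.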
The result of a TARA assessment shows you the weaknesses of the system. You can estimate how much work is needed to improve the system, what technical knowledge is missing in the development team, and which expectations or misunderstandings led to incorrect implementations.
Applying the procedure clearly shows how quick and easy it is to get a software architecture assessment done. So let's go!