“Automated testing is an integral part of the development lifecycle.”
In our Android app projects we’ve implemented MVP, Rx with Retrofit, Content Provider/SQLite, and Dagger. Almost every Android app involves server communication, storing data in a local database, complex UI such as a navigation drawer and RecyclerView, and non-trivial navigation flows through the application.
What do we want to achieve?
- A small set of test cases that should run every time before we deliver the APK to the client or release it on the Play Store (20–30% automated testing)
- A list of business-logic test cases that cannot be auto-tested for whatever reason – complex UI, navigation flow, etc. (40–60% manual testing)
- Continuous Integration
Based on the above, there are a few questions:
- What should be tested automatically versus manually, and how do we decide?
- For automated testing, where should we test in MVP – the Model, View, or Presenter layer?
- What kind of general business logic should be auto-tested for mobile apps – e.g. registration, login, forgot password, update profile?
- What types of testing should be performed for Android apps – unit testing, functional testing, integration testing, manual testing, performance testing, regression testing?
- Which tools should we use – Android Testing Support Library, Espresso, UI Automator, Robotium, Robolectric, Appium, Selendroid, Mockito, JUnit?
(Feel free to improve this checklist, as we don’t know the best practices for a testing module in the SDLC of an Android mobile app.) Originally asked here.
Some answers to your questions:
Auto vs. manual: once design/dev cycles have settled, automated tests should be part of the code delivery before releasing. A good trigger here is simply to include UI testing in the Definition of Done on stories before they’re shipped. For Android, this could be as simple as a few Espresso tests that cover the new functionality.
MVP layer testing: unit test your presenters and UI test your views. Between them, that covers almost anything that can break in your models, because model changes are rarely made in isolation from those two layers. High unit coverage in the presenters also helps balance how many UI tests you have to write. See this article for an in-depth tutorial.
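To make the presenter-testing point concrete, here is a minimal sketch. `LoginView`, `LoginPresenter`, and the validation rules are hypothetical names invented for this example, and a hand-rolled fake view stands in for a Mockito mock so the snippet runs on the plain JDK, with no emulator:

```java
// Hypothetical MVP presenter test: the presenter is pure JVM logic,
// so it can be unit tested without the Android framework.

interface LoginView {
    void showError(String message);
    void navigateToHome();
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) { this.view = view; }

    void onLoginClicked(String email, String password) {
        if (email == null || !email.contains("@")) {
            view.showError("Invalid email");
        } else if (password == null || password.length() < 8) {
            view.showError("Password too short");
        } else {
            view.navigateToHome(); // real code would go through the model first
        }
    }
}

// Hand-rolled fake that records what the presenter told the view to do.
class FakeLoginView implements LoginView {
    String lastError;
    boolean navigated;
    public void showError(String message) { lastError = message; }
    public void navigateToHome() { navigated = true; }
}

public class Main {
    public static void main(String[] args) {
        FakeLoginView view = new FakeLoginView();
        LoginPresenter presenter = new LoginPresenter(view);

        presenter.onLoginClicked("not-an-email", "longenough");
        if (!"Invalid email".equals(view.lastError)) throw new AssertionError();

        presenter.onLoginClicked("user@example.com", "longenough");
        if (!view.navigated) throw new AssertionError();

        System.out.println("presenter tests passed");
    }
}
```

In a real project you would typically use JUnit and Mockito instead of the hand-rolled fake, but the shape of the test – drive the presenter, assert on what it asked the view to do – stays the same.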
Business logic: at the very least, cover ALL tasks on the critical paths that users take to accomplish key goals (i.e. your revenue stream, basic adoption). So yes, this includes registration, login, and password features, but it might not cover every preference/configuration and its effects.
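As one illustration of critical-path business logic that is cheap to automate: a password policy used by registration and forgot-password flows. `validatePassword` and its rules are invented for this sketch, not taken from the question:

```java
// Hypothetical password-policy check for registration/forgot-password.
// Pure logic like this belongs in plain JVM unit tests, not UI tests.
public class Main {
    // Returns an error message, or null if the password is acceptable.
    static String validatePassword(String password) {
        if (password == null || password.length() < 8) return "too short";
        if (password.chars().noneMatch(Character::isDigit)) return "needs a digit";
        if (password.chars().noneMatch(Character::isLetter)) return "needs a letter";
        return null;
    }

    public static void main(String[] args) {
        if (validatePassword("abc") == null) throw new AssertionError();      // too short
        if (validatePassword("abcdefgh") == null) throw new AssertionError(); // no digit
        if (validatePassword("12345678") == null) throw new AssertionError(); // no letter
        if (validatePassword("abcde123") != null) throw new AssertionError(); // valid
        System.out.println("password policy tests passed");
    }
}
```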
Type of testing: each type exercises different layers/aspects of your application, so ask yourself: “which details, in which layers of my app, do I care about?”
- Unit: this is basic code validation, so yes, always – that’s dev efficiency 101. High code coverage helps you catch bugs early.
- Integration: yes, though how much depends on how complicated your app is; testing the app with and without its dependencies helps isolate who’s at fault when a test fails.
- Functional (UI) tests: yes, whether simple interactions or complete workflows – it’s about how your users actually work with your app. Some functions can’t be tested without first going through a series of other steps. Again, align with actual usage and business expectations: map the amount of work here to reality – usage metrics, impact on revenue, etc.
- Performance: this is hard, and there are different schools of thought. What we see is that performance “checks” along the way are necessary, but full performance-testing cycles often impede development unless there’s a high degree of maturity and process in the team/org.
- Regression: don’t leave regression as one huge task at the end! Smaller regression sets, informed by the changes you’ve made, reduce the number of defects caught in late-cycle regression testing. Earlier means smaller – and don’t forget we’re dealing with a very fragmented Android ecosystem, so multiple devices/platforms/conditions need to be part of the regression strategy!
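The “smaller regression sets informed by your changes” idea can be sketched as a simple mapping from changed modules to the suites worth re-running. The module names and suite names below are invented for the example:

```java
// Illustrative sketch: pick a reduced regression set from the modules
// touched by a change, instead of running everything at the very end.
import java.util.*;

public class Main {
    // Hypothetical module -> regression-suite mapping.
    static final Map<String, List<String>> SUITES_BY_MODULE = Map.of(
        "login",   List.of("LoginUnitSuite", "LoginUiSuite"),
        "profile", List.of("ProfileUnitSuite"),
        "network", List.of("ApiIntegrationSuite", "LoginUiSuite")
    );

    // Union of suites for the changed modules, de-duplicated, stable order.
    static Set<String> suitesFor(Collection<String> changedModules) {
        Set<String> suites = new LinkedHashSet<>();
        for (String module : changedModules) {
            suites.addAll(SUITES_BY_MODULE.getOrDefault(module, List.of()));
        }
        return suites;
    }

    public static void main(String[] args) {
        // A change touching login and network re-runs only three suites.
        System.out.println(suitesFor(List.of("login", "network")));
    }
}
```

The per-change subset runs on every build; the full matrix across devices and OS versions can then be reserved for release candidates, which keeps the fragmented-ecosystem cost where it matters most.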
Tools: you’ve pretty much nailed the current toolchain. For Android UI testing, Espresso/Dagger/Mockito is a huge win; keep these tests small and focused. For end-to-end testing, Appium is still your best friend, but there are things even it can’t do (like visual validation and certain popups) that you’ll need to look beyond it to automate.
Also, while I completely understand your statement “can not be auto tested because whatever reason”, I think that’s a big red flag, and the details matter a lot. The choice of automated vs. manual should be a business decision about how to achieve velocity goals, not a consequence of technical limitations and shortfalls. I hear this all the time from customers, until they realize that the right tech enables them to achieve the level of automation that’s right for them.
There are two pieces of research I assisted with this year that I think will help this conversation:
Hope this and the research above help your work.