A Conversion Conversation with Rangle.io’s Franceska Viray
During my stint as a PM at Ritual, I was fortunate enough to have Franceska as the QA lead on my Pod. She’s an all-star Quality Analyst who brings a wealth of knowledge about QA and agile to the table. On countless occasions, her thoroughness and attention to detail have caught issues with features I wanted to launch and experiments I wanted to run. She even put up with my endless dad jokes.
I recently caught up with Franceska to chat about the importance of QA when running experiments, the misconceptions around it, and the skills needed to excel at QA.
Rommil: Hi Franceska, so great to catch-up with you! How have you been?
Franceska: Hey Rommil, happy new year! I’ve been good. Great to “sync” up with you again.
LOL — “Sync”. Classic Ritual jargon! And yes, great to “sync” with you again. So obviously, you’re not at Ritual anymore — could you share with us where you are now and what you do there?
I’ve recently joined Rangle.io as an Agile Quality Analyst. In addition to traditional QA tester tasks, I also help identify, facilitate, and coach the quality efforts on projects, as well as help our clients adopt or improve their quality practices.
“Quality should be a team effort and initiative.”
Very cool. As you know, there are a lot of misconceptions about what Quality is about — could you share some of the ones you’ve encountered during your career?
A big misconception that I’ve encountered and observed often is the idea that quality is the sole responsibility of the tester or QA team. Whether a team follows an agile or waterfall approach, the evaluation of a feature or product with regard to quality often happens only at the end of the life cycle, once it’s in the hands of the QAs. However, Quality should be a team effort and initiative.
Another misconception, one geared more toward the testing aspect of Quality, is that automated testing can replace the need for manual testers. Automated testing definitely has its benefits and plays an important role in supporting QA testers, but automation can never be as accurate or as meticulous as a human being. The human touch is still needed to identify specific issues, to truly assess the usability of a system, and to catch what automation cannot.
I definitely know you’ve saved our butts more than a few times!
As long as I’ve been involved in product, I’ve always known Quality teams to be understaffed — in situations like these, how do you balance the need for quality and the need for progress?
To meet both needs, I think that it’s important that Quality is involved from the beginning of the development process and should be seen as part of the acceptance criteria for the feature or product. To do this, a team should first define what Quality looks like, how it will be measured and how it will be monitored and maintained. Taking a proactive approach like this opens up the opportunity for your QAs to contribute their expertise in the earlier stages which helps reduce rework, blockers and bugs. In addition, when it comes to the “Ready for QA” bottleneck of tickets, adopting a “Shift Left” approach to testing definitely helps meet the need for progress without sacrificing quality.
“…it’s important that Quality is involved from the beginning of the development process…”
Define success and how it will be measured. It’s such a common theme you’d think we would have gotten it right already lol
Turning our focus towards Experimentation, what steps should teams take to ensure Quality is maintained while running experiments?
Try to ensure that the experiments running are not impacting other areas of the product or feature. For example, if your experiment is evaluating the placement of a component on a website or app, be sure to check that any surrounding elements or user flows are not impacted. Shy away from running experiments that would introduce new dependencies or bugs.
Define a baseline and monitor it often — for companies running multiple experiments at once, it may be hard to determine how or whether one experiment breaks another. Sometimes it’s not until you start to review the results that you realize the experiment stopped running three days earlier than you intended because another experiment or a new feature interfered. Establishing a baseline gives you an idea of what success, failure, and off-track look like for the experiment, and monitoring that baseline allows you to take a proactive approach to ensuring the experiment is still valid.
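The baseline advice above can be sketched as a simple daily check. This is a minimal illustration, not anything Franceska specified: the metric names (exposure counts, error rates) and thresholds are hypothetical assumptions, so adapt them to whatever your own experiment platform reports.

```python
# Hypothetical sketch of baseline monitoring: record what "on-track" looks
# like at launch, then compare daily snapshots against it so interference
# is caught early rather than at readout time.
from dataclasses import dataclass


@dataclass
class Baseline:
    min_daily_exposures: int  # traffic floor observed at launch (assumed metric)
    max_error_rate: float     # acceptable client-error rate (assumed metric)


def check_experiment(baseline: Baseline,
                     daily_exposures: int,
                     error_rate: float) -> list[str]:
    """Return a list of warnings; an empty list means the experiment looks on-track."""
    warnings = []
    if daily_exposures < baseline.min_daily_exposures:
        warnings.append("exposures below baseline: another experiment "
                        "or feature may be interfering")
    if error_rate > baseline.max_error_rate:
        warnings.append("error rate above baseline: the variant may have "
                        "introduced a bug")
    return warnings
```

Run against yesterday’s numbers, an empty result means the experiment still matches its launch-day baseline; any warning is a prompt to investigate before the results are contaminated.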
That’s interesting. Experiment interaction is when experiments that run at the same time impact each other by either suppressing or amplifying each other’s results — but this is an angle I hadn’t thought about.
So then, how do you suggest teams manage Quality when several experiments are running simultaneously?
- Tracking — Depending on the company’s size, it can be extremely difficult to stay aware of what’s being released and when. Create a shared calendar to help track experiment launch dates and end dates.
- Checklist — Experiments ready for launch should undergo a quick checklist to help ensure each running experiment is independent of the others. This checklist will vary depending on how experiments are built within your organization; however, it should list items applicable to any experiment launched (e.g., experiment IDs, logging, communication plan) as well as the tasks required to remove the experiment.
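As a rough sketch, the checklist could live in code so it can gate a launch script. The item wording below borrows from the interview (experiment IDs, logging, communication plan, removal tasks), but the structure and the `ready_to_launch` helper are illustrative assumptions, not an actual Rangle.io or Ritual process.

```python
# Hypothetical pre-launch checklist; adapt the items to how experiments
# are built within your organization.
LAUNCH_CHECKLIST = [
    "Unique experiment ID assigned and registered",
    "Event logging verified",
    "Communication plan shared (launch and end dates on the shared calendar)",
    "No overlap with currently running experiments on the same surface",
    "Ticketed task exists to remove the experiment code after readout",
]


def ready_to_launch(completed: set[str]) -> list[str]:
    """Return the checklist items still outstanding; an empty list means go."""
    return [item for item in LAUNCH_CHECKLIST if item not in completed]
```

A launch script could refuse to flip the experiment on while `ready_to_launch` returns any outstanding items.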
“Create a shared calendar to help track experiment launch dates and end dates.”
In your opinion, is there a role for automation in QA when it comes to Experimentation?
When it comes to QA and automation, it’s always better to approach the decision by asking what benefit it will have in the long run.
To determine if it’s worth it, I’d ask:
- How long are you looking to run the experiment?
- How often will changes be made to the experiment?
- Would automating it take longer than just running the tests manually?
- What is the experiment measuring? Is this something automation would catch?
- Is there data/tools to support automation?
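The cost-related questions above can be framed as a back-of-the-envelope comparison. This is my sketch, not a formula Franceska gave: the weights (especially the 20% maintenance factor) are illustrative assumptions, and it deliberately ignores the qualitative questions about what the experiment measures and whether automation could even catch it.

```python
# Rough heuristic: automate only when the estimated cost of building and
# maintaining the tests is less than the cost of running them by hand.
# All weights here are illustrative assumptions.
def worth_automating(run_weeks: int,
                     runs_per_week: int,
                     manual_minutes_per_run: int,
                     changes_per_week: float,
                     automation_hours: float) -> bool:
    """Compare estimated manual-testing hours against automation hours."""
    manual_cost_hours = run_weeks * runs_per_week * manual_minutes_per_run / 60
    # Frequent changes to the experiment mean constant test upkeep;
    # assume each week of churn costs a fraction of the build effort.
    maintenance_hours = automation_hours * 0.2 * changes_per_week * run_weeks
    return automation_hours + maintenance_hours < manual_cost_hours
```

For a two-week experiment tested a handful of times, the build cost alone usually outweighs the manual effort; for a long-running, stable experiment, the balance flips.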
Should companies have a dedicated QA function or can product teams QA things themselves?
QAs have a particular skill set and mindset that are needed in the development and delivery of products. It’s important to remember that QAs don’t just test things, and that Quality is much more than that. So sure, you could teach product teams to QA things themselves (and you should!), but having a dedicated QA function ensures quality is being met at each stage of the development lifecycle.
What skills should those interested in QA have or develop?
- Communication — QAs work closely with every member of the team: developers, product owners, stakeholders, and more. So whether you’re working through a bug with a developer, clarifying acceptance criteria with the product owner, or reporting on the overall quality of the product to stakeholders, QAs need to be able to communicate effectively with each unique role.
- Organization — Test plans, test case management, reporting, and ticket management are just some of the day-to-day artifacts you’ll work with as a QA, so staying organized as requirements change is definitely key.
- Discipline — Testing is no doubt a repetitive process! This may lead some to lose focus, so it’s important that QAs be disciplined in a sometimes dull environment.
Finally, it’s time for my favorite part of the conversation, the lightning round!
What is harder to QA — Mobile Web or Mobile Apps?
I’d have to go with Mobile Web. I wouldn’t say it’s necessarily harder, just much more tedious.
Pinay ka ba? (Are you Filipina?)
At least you respect your elders LOL!
Franceska, it’s been a pleasure as always, salamat (thank you).