Conference Topic Overviews

Wednesday, February 28th


Keynote Topic - Testing is Not A 9 to 5 Job

Mike Lyles (8:45 am - 10:00 am)

In the past Olympics, we watched Michael Phelps do something that no other Olympian has ever accomplished.  We spent weeks watching him win more gold medals than anyone in Olympic history.

It’s easy for someone to imagine that athletes such as Phelps are born winners.  It’s easy to think that it’s in their DNA to experience such greatness.  What many fail to realize is all of the preparation Phelps put in before the Olympics.  He spent days, months, and years practicing, refining his techniques, modifying his strategies, and improving his results.

Being an expert tester is no different.  While the art and craft of testing and being a thinking tester is something that is built within you, simply going to work every day and being a tester is not always enough.  Each of us has the opportunity to become a “gold medal tester” by practicing, studying, refining our skills, and building our craft.

In this keynote, we will evaluate how good testers can become great as we discuss:

  • Inputs from the testing community on how they improve their skills
  • Suggestions for things to do outside of work
  • Leveraging social media to interact with the testing community
  • Building your own brand in the testing community
 

Risk-Based Testing: What Happens When You Can't Test Everything

Jenny Bramble (10:15 am - 11:15 am)

Do you feel like you're under the gun to test everything when your team rolls out a new feature?  Do you worry that your teammates don't understand why you choose to test the items you do?  Are there moments in your life where you deeply question whether you can successfully complete the testing requirements of a sprint?  Do you just really like cats?  If you answered yes to any of those items, this is the talk for you.  We will define and discuss risk as a tangible metric, striving to break it down into components that you can use to talk to developers, product owners, business people, and any other stakeholders.  Having a common language for what risk is and what it's made of allows us to decide what we should test and when we should test it.  We will also talk about building a risk matrix and why we should even bother.  Included will be a heavy dose of jokes, storytelling, anecdotes, and pictures of my cat Dante.
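A risk matrix of the kind this session describes can be sketched as a simple likelihood-by-impact grid. The sketch below is illustrative only; the 1-5 scales, thresholds, and example features are assumptions, not taken from the talk:

```python
# Minimal risk-matrix sketch: score = likelihood x impact, then bucket
# the score into a priority band. Scales and thresholds are illustrative.

def risk_score(likelihood, impact):
    """Both inputs on a 1-5 scale; returns the combined risk score."""
    return likelihood * impact

def risk_band(score):
    """Map a 1-25 score to a testing-priority band (thresholds assumed)."""
    if score >= 15:
        return "high"     # test first, test deeply
    if score >= 8:
        return "medium"   # test in the normal pass
    return "low"          # spot-check or defer

# (likelihood of failure, impact if it fails) for some invented features
features = {
    "checkout payment": (4, 5),
    "profile avatar upload": (3, 1),
    "login": (2, 5),
}

for name, (likelihood, impact) in sorted(
        features.items(),
        key=lambda kv: risk_score(*kv[1]),
        reverse=True):
    score = risk_score(likelihood, impact)
    print(f"{name}: score={score}, band={risk_band(score)}")
```

The point of the grid is exactly the shared vocabulary the abstract mentions: "high" and "low" mean the same thing to testers, developers, and product owners.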

 

Leading Agile Testers at Scale

Mary Thorn (10:15 am - 11:15 am)

Mary Thorn has had the opportunity in the past twenty years to work at many startups, creating several QA/test departments from scratch. For the past ten years, she has done this in agile software companies. Recently Mary moved from leading small agile test organizations to leading a large agile test organization where she has learned how to lead agile testers and agile testing in large contexts. Mary takes you through what she has learned, identifies the keys to transitioning your test organization as it grows, and discusses the techniques required to lead it through the changes. Agile testing is difficult; training your testers to be consistent and interchangeable across large scale agile teams is even more difficult; and still more difficult is test automation at scale. Join Mary as she shares her experience in creating an automation strategy that works in a large scale context and lessons learned from leading a large agile test organization.

 

How Low Can Your Tests Go?

Dawn Code (10:15 am - 11:15 am)

You've seen the Test Automation Pyramid and talked at length about "layers" of tests: unit tests, integration tests, API tests, functional or UI tests.  You’ve seen that when multiple layers of testing are leveraged, your test suite contains a LOT of unit tests and significantly fewer higher-level tests – and that’s OK. You’ve seen over time that as the system under test changes, there is a cost to maintaining the tests at all layers. Sometimes it works out; sometimes it is an overwhelming burden. Why are some tests harder to write than others? Why are some tests so expensive to maintain? When you design a specific test case or example, how do you determine in which layer of the Test Automation Pyramid to implement that test case?

Dawn Code will walk through an example feature, and use specific examples of tests to highlight strategies for determining "how low the test can go". You will walk away from this talk with a solid strategy, enabling you to ask a few simple questions as you explore and strategize your testing approach for any system under test.
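The idea of pushing a test "down" the pyramid can be illustrated with a toy example: the same business rule checked once at the unit layer and once through a thin service wrapper. This is a hedged sketch, not material from the talk; the discount rule and function names are invented:

```python
# The same rule, tested at two layers. Pushing the check down to the unit
# level makes it cheap to run and maintain; the higher-level test then
# only needs to confirm the layers are wired together.

def discount(total):
    """Business rule: 10% off orders of 100 or more (illustrative)."""
    return round(total * 0.9, 2) if total >= 100 else total

# Layer 1 -- unit tests: exercise the rule directly, fast and precise,
# including the boundary cases.
assert discount(100) == 90.0
assert discount(99.99) == 99.99

# Layer 2 -- "service" test: goes through a wrapper. It verifies wiring
# and should not re-enumerate every boundary case already covered above.
def checkout_api(cart):
    return {"total": discount(sum(cart))}

assert checkout_api([60, 60])["total"] == 108.0
print("both layers pass")
```

The boundary cases live at the bottom, where they are cheapest; the top layer stays small.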

 

Testing the Next Generation of Technologies, IoT, Mobile, and Cloud

Costa Avradopoulos (10:15 am - 11:15 am)

This session will cover recent trends and challenges in testing IoT, Mobile, and Cloud applications.  Next, we will discuss the components that go into a proper test strategy, such as building a test lab, test coverage, test data, test management, tools, and automation.  Lastly, Costa will walk participants through a recent case study of a large project with a Fortune 500 company.  This project entailed a mobile enterprise app, an IoT solution for embedded devices, and a Cloud solution for dashboards and analytics.  We will cover real project challenges and how best practices were applied to overcome them.

Key Takeaways:

  • Create a proper multi-technology test strategy
  • Learn how to address nuances of IoT, Mobile, Cloud for testing teams
  • Review how to design a world class multi-technology test environment
  • Explore sample frameworks and methodologies for multi-technology environments

  • Learn from real-world examples of a high-profile Fortune 500 client project

 

Transforming Culture with DevOps Principles

Ashley Hunsberger (10:15 am - 11:15 am)

At the heart of DevOps is the idea that teams break down silos and work together to innovate faster, reducing feedback loops. Ashley Hunsberger describes how companies are using DevOps principles such as iterative improvements, collaborative practices, and incremental testing to transform development culture so that everyone owns quality.  Change doesn’t happen overnight, so how can we make smaller changes that meet an overall vision? Join Ashley as she lays the groundwork for iterative and continuous improvement through a defined mission and goals. Setting your teams up to meet those goals is critical to success. Ashley discusses cross-team collaboration, breaking down the silos to achieve it, and aligning team members with the necessary skills to help teams meet important quality goals. Also key to success is understanding that - contrary to what we want - we cannot test everything. We need an incremental testing approach that teams can own, instead of a huge, unmanageable test suite. Ashley shares examples of how we can model this in a continuous delivery pipeline, illustrating the reduced feedback loops based on defining risk and understanding the purpose of each suite. Now that we’ve seen the way of DevOps, we don’t want to go back!

 

Crucial, Pivotal, and Radical Candor Conversations - Closing the Gap

Bob Galen (11:30 am - 12:30 pm)

Let's face it. As leaders, we often struggle to create and nurture the sort of communication and conversation that our organizations and teams need. For many of us, it's not a comfort zone or a strength. And frankly, these conversations are often uncomfortable and take a great deal of energy. That being said, today's leaders need to face this shortcoming and become much more skilled and comfortable initiating and executing the sorts of conversations that will engage their teams and deliver value for their customers. In this session, Bob Galen will explore the central aspects of having effective conversations in all directions - downward, outward, and upward. We'll explore some conversation tactics and specific models that will help you craft and deliver much-improved conversations. And beyond the tactics, you'll also get a chance to practice some real-time crucial conversations to truly sharpen your saw.

 

Test Automation: Beyond the UI

Paul Merrill (11:30 am - 12:30 pm)

Are you tired of broken UI tests? Is your team spending weeks replicating manual tests in Selenium, Coded UI, or UFT? Have you heard of service-level testing, but wonder what it is and how to do it?

Join Paul Merrill, Principal Software Engineer in Test at Beaufort Fairmont Automated Testing Services, for this session to learn the basics of test automation beyond the UI. Learn about APIs and how to test them. If you're tired of brittle, high-maintenance UI test automation, breathe easy. There is another way.

In this session you'll learn:

  • Why UI testing is so difficult
  • Alternatives to UI testing
  • The definition of service-level testing
  • The meaning of terms like API, REST, JSON and Microservices
  • A few tools for testing APIs
  • How to test a service right from your computer with no installs or extra software

Join Paul as he walks through each of these concepts and helps you learn with hands-on exercises on your own computer.
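As a taste of what a service-level test looks like, here is a minimal sketch that asserts on a JSON payload the way an API test would, using a canned response in place of a live HTTP call. The endpoint and fields are invented for illustration and are not from the session:

```python
import json

# A canned response standing in for what an HTTP client would return
# from a hypothetical GET /users/42. In a real test, an HTTP library
# would issue the request; the assertions afterward look the same.
canned_status = 200
canned_body = json.dumps({"id": 42, "name": "Ada", "roles": ["tester"]})

# Service-level assertions: status code first, then the shape and
# content of the payload. No browser, no UI locators involved.
assert canned_status == 200
payload = json.loads(canned_body)
assert payload["id"] == 42
assert "tester" in payload["roles"]
print("service-level checks passed")
```

Because the checks target the payload rather than the rendered page, they do not break when a button moves or a CSS class changes.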

 

Criteria for Selecting the Right Tools for your Mobile Test Automation

Eran Kinsbruner (11:30 am – 12:30 pm)

There’s a shift to open-source mobile test automation tools happening today among developers and QA, and it’s not just happening in mobile testing. Many mature technology sectors are adopting lightweight, flexible, vendor-transparent tools to fulfill their need for speed and integration. As with many free and open-source software markets, however, having a plethora of tools to choose from complicates the process. How can you learn, integrate and deploy in your own environment? In this workshop, you will learn a unique decision-based matrix to help QA managers and test automation leads evaluate the right test automation tool for their environment.   

The evaluation matrix addresses critical considerations such as ease of script development with the test development language and ease of script execution within the tool. Other evaluation criteria include the capabilities to fully integrate tools within IDEs,  cross-platform vs. platform specific test frameworks, and application use cases (heavy UI components etc.). By attending this workshop, you will be better prepared to perform your own evaluation and select the best fit automation tool for your goals.

Takeaways

  • Learn how to choose one open-source test framework over another for mobile and cross-browser testing
  • Navigate a test automation comparison guide that can be applied in every mobile/web project
  • Specifically for mobile, distinguish among the top five mobile test automation frameworks: Appium, Selenium, Espresso, XCTest, and Calabash

 

Performance Engineering in a DevOps World

Mark Tomlinson (11:30 am - 12:30 pm)

Learn about overcoming the challenges of transforming to DevOps with performance in mind. In a DevOps world, what, how, and when you measure all change. With increased automation, everything moves faster, and decisions to act and react can happen even without a performance engineer’s explicit approval. How can you leverage your performance engineering expertise across a deployment pipeline and translate your optimization insights to inform operational automation and escalation processes?

Session Takeaways:
This session will leverage real-life observations and learnings from organizations failing, struggling, changing and succeeding at integrating performance engineering into their DevOps practices. We’ll review the changes required in your thinking about performance measurement across the life of a system and its components. We’ll discuss the evolving understanding of the value of performance work with regard to both engineering effort and system management.

 

ATDD/BDD Enables DevOps

Ken Pugh (1:30 pm - 2:30 pm)

The continuous delivery of business value by DevOps is enabled by reducing defects that cause loopbacks and delays, by making testing take less time, and by decreasing the size of features that flow through the pipeline. Acceptance Test-Driven Development (ATDD)/Behavior Driven Development (BDD) helps with these aspects of software delivery. With ATDD/BDD, product owners, testers, and developers create tests that describe the desired behavior of the system. By defining tests up front, an application can be designed to be easier to automatically test. Defects can be discovered earlier in the development cycle. Work items can be split by acceptance tests. In this session, Ken Pugh introduces ATDD/BDD, explains how it works, and outlines the different roles that team members play in the process. Ken shows how acceptance tests created during requirement analysis decrease ambiguity, increase scenario coverage, help with effort estimation, and act as a measurement of quality. 
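An acceptance test in the ATDD/BDD style reads as a concrete example of desired behavior, agreed on before coding. Below is a minimal sketch in plain Python; the coupon feature and names are invented, and real teams often express the Given/When/Then wording in a tool such as Cucumber or SpecFlow:

```python
# Acceptance tests written first, in Given/When/Then form. The production
# code below exists only to make the example runnable; in ATDD the tests
# would be agreed on by product owner, tester, and developer up front.

def apply_coupon(cart_total, code):
    """Illustrative system under test: SAVE10 takes 10 off totals over 50."""
    if code == "SAVE10" and cart_total > 50:
        return cart_total - 10
    return cart_total

def test_coupon_reduces_qualifying_order():
    # Given a cart totaling 60
    cart_total = 60
    # When the customer applies coupon SAVE10
    result = apply_coupon(cart_total, "SAVE10")
    # Then the total is reduced to 50
    assert result == 50

def test_coupon_ignored_for_small_order():
    # Given a cart totaling 40, When SAVE10 is applied,
    # Then the total is unchanged
    assert apply_coupon(40, "SAVE10") == 40

test_coupon_reduces_qualifying_order()
test_coupon_ignored_for_small_order()
print("acceptance tests passed")
```

Notice how the second scenario removes an ambiguity ("what about small orders?") that a prose requirement might have left open; that is the decreased-ambiguity benefit the abstract describes.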

 

STEMpathy - Why Quality Professionals are More Important Now Than Ever!

Ann Hungate (1:30 pm - 2:30 pm)

“STEMpathy,” a term coined by academics and made popular by writers, is a call to action for today’s educators to bring the liberal arts into STEM curricula. But we can’t wait for students to graduate; we need empathy in software development today!

Brand and customer interaction are digital before they are personal.  Our experiences as employees, citizens, and participants are shaped by the digital requirements of registration, classification, and interaction.  How can we make these digital experiences kind, intuitive, and welcoming?

Quality professionals are the first and best candidates to bring empathy to today’s software development practices – we put the customer before the technology, we put the accuracy before the tools – we care ferociously about the people who use the system and want to find problems before customers do.

Come to this session to learn more about STEMpathy and how you can bring a few targeted practices to your team and drive up the user experience immediately! Leave with a stronger appreciation for your team, your impact, and your perspective.

 

Which Tests Should We Automate? 

Angie Jones (1:30 pm - 2:30 pm)

More and more teams are coming to the realization that automating every single test may not be the best approach. However, it's often difficult to determine which tests should be automated and which ones are not worth it. When asked “which tests should we automate?”, my answer is always “it depends”. Several factors should be considered when deciding on which tests to automate and many times that decision is contextual. Join in on this highly interactive session where, together, we will explore features and associated tests then discuss whether the tests should be automated or not considering the factors and context provided.

Takeaways:

  • Identification of the key factors to consider when deciding which tests to automate
  • How to gather the data needed to make these decisions
  • A formula that can be applied to any test to determine if it should be automated or not
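One common shape for an automate-or-not formula is a weighted score of a test's value against its cost. The sketch below is a generic illustration with invented factors, weights, and threshold; it is not the specific formula presented in the session:

```python
# Generic automate-or-not scoring sketch. Each factor is rated 1-5;
# the weights and decision threshold are illustrative assumptions.

WEIGHTS = {
    "risk": 3,        # damage if this area breaks in production
    "value": 2,       # how often the test pays for itself (run frequency)
    "stability": 2,   # how stable the feature is (volatile UIs score low)
    "ease": 1,        # how cheap the test is to automate and maintain
}
THRESHOLD = 24  # scores at or above this suggest automating

def automation_score(ratings):
    """Weighted sum of 1-5 factor ratings."""
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

def should_automate(ratings):
    return automation_score(ratings) >= THRESHOLD

login_flow = {"risk": 5, "value": 5, "stability": 4, "ease": 3}
one_off_banner = {"risk": 1, "value": 1, "stability": 2, "ease": 4}

assert should_automate(login_flow)          # score 36: automate
assert not should_automate(one_off_banner)  # score 13: test manually
```

The "it depends" in the abstract lives in the ratings and weights: two teams scoring the same test in different contexts can reasonably reach opposite decisions.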
 

How to Optimize your Testing Process: Finding the biggest opportunities to release faster and with fewer issues

Kevin Lee (1:30 pm - 2:30 pm)

As mobile teams get bigger and need to move faster, they need to increase the efficiency of how they work.  The pressure to release higher-quality app experiences means that testing cycles have become more important.  How do mobile teams continue to increase quality without increasing cost and release times?  How do teams work more efficiently as they grow?  How does security within the app factor into the testing process?  When do companies need to invest in creating an internal mobile lab for testing versus outsourcing with the right partner?

This breakout will address ways to release faster without sacrificing quality, security or time to market.  We will discuss how leveraging the right tools, building agile processes and creating practices can dramatically increase collaboration and efficiencies across teams and offices. And with the rising security risks and need to protect user information, we will cover how mobile apps can be more effectively reviewed and tested for security vulnerabilities.

 

Testing In The Dark

Rob Sabourin (2:45 pm - 3:45 pm)

Isn't it amazing? Stakeholders drop software on our desks and expect us to test it with no requirements, no design, and no product knowledge whatsoever. About the only clear thing is the absurd and unrealistic deadline. We are expected to bend over backward, spread magic pixie dust, and heroically test quality into a product we have never heard of before. But testing in the dark is not impossible - and as Rob Sabourin shows, it can even be a very valuable and fun experience.

Learn strategies to emerge from a murky fog into clear, meaningful quality insights. Leverage unlikely sources of information about what stakeholders care about and what users really need the software to do. Rob will introduce you to reconnaissance-style, charter-driven, and session-based exploratory testing and will help you provide meaningful estimates to stakeholders with hardly any hard information about the software under test. Rob shares recent experiences testing in the dark on chaotic, turbulent projects, turning his product ignorance into a testing superpower. Join Rob’s quest to find important bugs fast, testing in the dark, and you too will see the light!

 

I Hate Metrics, I Love Metrics

Shaun Bradshaw (2:45 pm - 3:45 pm)

Metrics presentations have been a staple of software conferences for years. But why are they so popular? Businesses rely on data to make decisions, and metrics allow them to roll up data into bite-sized morsels of deliciousness for managerial consumption. While metrics can help leaders make good business decisions, sometimes the numbers are “massaged” in a way that doesn’t realistically portray what’s happening (a.k.a. a watermelon project: green on the outside, red on the inside). Ultimately, there’s validity on both sides of the debate. Metrics can suck! Sometimes metrics imply something totally different from reality; other times, they provide valuable insights that can guide efforts with better questions and decisions on how projects or teams should proceed.

Shaun Bradshaw, once referred to as the Minister of Metrics, will discuss various aspects of metrics, particularly how they relate to product quality and testing. We'll explore both the dysfunctions arising from “objective” metrics and what makes metrics useful and usable for good. Walk away with a better sense of how to utilize metrics in a way that minimizes potential dysfunctions and truly reaps the benefits of the data available.

 

Combining UI and Services/APIs Automation for Comprehensive Testing

David Dang (2:45 pm - 3:45 pm)

The technology landscape for many companies is ever changing with the increased usage of web, mobile, and services/APIs. These technologies are closely intertwined: the same service can be used on a website, on the mobile web, and in a mobile app. However, many QA teams treat these technology stacks as individual silos for testing; each stack is tested individually. By combining technology stacks into a single scenario, QA teams can gain testing efficiency across stacks. Companies can utilize a single automated test containing both UI and service/API steps, allowing increased coverage on both UI and service/API platforms. Furthermore, this approach helps to pinpoint defects and the root cause of failures.

The goal of this presentation is to help you understand the following:

  • Services/APIs automation
  • UI automation
  • Benefits of combining UI and Services/APIs automation
  • Tools compatibility for integration
  • Process to combine UI and Services/APIs automation
 

Mobile Testing: Challenges and How to Handle Them

Philip Lew (2:45 pm - 3:45 pm)

Now that we’ve gotten beyond the initial shock and prevalence of mobile applications, we’ve come to realize that it’s not just about making apps work. In chasing the mobile market, we often don’t really understand or choose to ignore the differences in the mobile platform when it comes to designing and building a successful app. Of course, the mobile platform is smaller, but what else do you need to consider? To be successful, you need more than just “it works.” Phil Lew explores the top mobile quality challenges, and discusses how to approach and solve them. Some of these challenges include platform proliferation, internationalization, usability, when—and when not—to automate, assembling your mobile test lab, and how to handle the segmented performance issues with device, network, and server. Want to find out when to use simulators versus real devices or when to use remote lab services versus your own local lab? Come find out as Phil outlines the different approaches for optimizing your efforts in the platforms you test and the tools you use. 

 

Consumer-Driven Contract Testing with PACT

Ron Kuist (2:45 pm - 3:45 pm)

Are you lost in a sea of cross-product integrations and dependencies?  Have you experienced product issues because of deployments from another team’s solution that you integrate with?  After the scramble to resolve the issue, we often find ourselves asking: what could we have done to prevent this?

Ron Kuist will discuss a testing technique called consumer-driven contract testing and introduce a framework that allows consumers and providers to share their tests to ensure that functionality and contracts are adhered to.  He’ll demonstrate how PACT can bring clarity to microservice and cross-product chaos.  As applications adopt a microservices architecture and move away from monolithic design, we need to rethink our test strategy to ensure that we can deploy enhancements to our services and continue to support legacy functionality.
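The core idea behind consumer-driven contracts can be sketched without the PACT library itself: the consumer records the interaction it depends on, and the provider's build replays that interaction against its own implementation. The sketch below is a hand-rolled illustration, not the actual PACT API; the file format, endpoint, and names are invented:

```python
import json

# Consumer side: record the interaction this consumer relies on.
# With PACT, the library generates this file and publishes it to a
# broker; here we build the structure by hand for illustration.
contract = {
    "request": {"method": "GET", "path": "/orders/7"},
    "response": {"status": 200, "body": {"id": 7, "state": "shipped"}},
}
pact_file = json.dumps(contract)

# Provider side: replay the recorded request against the provider's
# implementation and verify the response honors the contract.
def provider_handle(method, path):
    """Illustrative provider implementation."""
    if method == "GET" and path == "/orders/7":
        return 200, {"id": 7, "state": "shipped", "carrier": "ACME"}
    return 404, {}

recorded = json.loads(pact_file)
expected = recorded["response"]
status, body = provider_handle(**recorded["request"])

assert status == expected["status"]
# Contract checks are typically "at least these fields": extra provider
# fields (like "carrier") are allowed; missing or changed ones fail.
for key, value in expected["body"].items():
    assert body.get(key) == value
print("provider honors the consumer contract")
```

Because the provider verifies against what consumers actually recorded, a deployment that would break a downstream team fails in the provider's own pipeline instead of in production.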