Speaker Slides will be available for download for paid conference attendees.  A link and password will be sent out a week before the conference.

Conference Topic Overviews
Thursday, March 3rd


Agile Testing: Learning Journeys for the Whole Team (Keynote)
-    Janet Gregory

When agile development first gained popularity, agile meant collocated teams, including testers, programmers, analysts, and customers who were expected to perform many functions. As agile methods have spread and expanded, many organizations with globally-distributed teams are facing challenges with their agile deployment. Having worked with many such teams, Janet Gregory has observed ways that distributed agile teams can include testing activities to help deliver a high-quality software product.

Janet shares her experiences and lessons learned that show how testing activities can help develop open communication and share data and information within a team or across continents. These ideas can help create a common understanding of what the team is building so the team can work together – and have fun doing it.


 
Conference Presentations

Designing a World-Class Mobile Device Lab
-    Costa Avradopoulos

Designing and maintaining a mobile device lab is a high priority when developing and testing today's mobile apps, but where do you start? Should you use a public cloud or set up your own lab on-site? Or maybe you want a hybrid and use both? Even more important, what kind of devices should you include? In this session, Costa will cover the benefits of having a device lab in the cloud versus locally, how to choose the appropriate mix of unique devices to ensure proper test coverage, the core components of building a world-class device lab, and evaluation criteria to consider when comparing options.


The Bounty Conundrum - Incentives for Testing
- Shaun Bradshaw

What comes to mind when you think of a bounty? Perhaps Dog the Bounty Hunter, a reality series featuring a biker dude with a bad mullet, or maybe Django Unchained, Quentin Tarantino’s film about a slave turned bounty hunter? Presenter Shaun Bradshaw doesn’t have a mullet or a star turn in Django Unchained, but he has witnessed his fair share of bounties used to motivate test teams. In software testing, some organizations regularly use bounty-style incentives to push teams to find more bugs, hoping to improve software quality. But bounties can backfire—a phenomenon known as the Cobra Effect—and create tension within a development organization without improving software quality. In this presentation, Shaun introduces alternative methods to incentivize productivity. Join Shaun as he discusses a merit-based system that offers a fresh take on incentivized testing. Learn why you should keep bonuses a surprise. Start to reward collaboration versus competition. Understand how to use both subjective and objective measurements in your favor and implement a rewards system that is “safe to fail”. Shaun explains these concepts and more in “The Bounty Conundrum.”



Automate all of the things
-    TR Buskirk

Software delivery requires reliable automation, with concerns far beyond test automation. From test framework selection to scalable monitoring systems, automation at all levels enables teams to keep their eyes on the prize: providing value to customers.


Mobile Testing: To Automate or Not to Automate?
-    David Dang

Companies are moving into mobile technology at a rapid pace, which has significantly increased the need for manual testing in that area. Naturally, companies are turning to automation to help ease the workload. But should you try to automate all things mobile? The answer is not always clear-cut. Mobile has its own set of complications, compounded by a huge variety of devices and OS platforms.
This presentation helps attendees learn what mobile testing activities are ripe for automation as well as those items best left to manual efforts.


Stop Calling It DevOps
-    Lee Eason

In the presentation, "Stop Calling It DevOps," Lee cuts through the jargon and gets to the heart of why we are all talking about DevOps in the first place. He discusses best practices and three tenets of how to bring about organizational change toward a DevOps culture. Using real-world data, he demonstrates the cost of failing to evolve your development practices, and what you leave on the table by not adopting current best practices. Finally, he discusses the danger of the label "DevOps," and what we all need to be doing to help mitigate those risks.



Privacy in the Digital Age
-    Julie Earp

The ubiquity of computing technologies has facilitated an increase in data collection, storage and analysis. These same technologies that provide benefits to businesses and individuals can also create potential harms and introduce numerous threats. Earp’s research focuses on Internet security and privacy issues from several different perspectives, including data management, consumer values, policy, economics and law.


How the 3 Amigos Help Create Better (More Valuable and Testable) Stories
-    Bob Galen

If you're using User Stories for your agile requirements, you're not alone. It seems to have become the ubiquitous vehicle for communicating customer requirements to agile teams. And it works incredibly well in this regard. However, many teams are experiencing problems with it.

You see, we often forget the "confirmation and conversation" parts of the story. When Kent Beck originated the idea of the User Story, his intent was to initiate or inspire a story-level Conversation around the Card and its Confirmation between a stakeholder or customer and the team. It was face-to-face, interactive, and ongoing.
 
The 3-Amigos model reinforces this collaboration through the three primary roles (Amigos) on agile teams: Developer + Tester + Product Owner.

Join Bob Galen in this talk that brings our focus back to the original intent of the User Story: emphasizing the Acceptance Criteria (the confirmation) and the Story itself (the conversation) across the 3 Amigos. Come prepared to practice writing stories that increase your effectiveness in defining solid acceptance criteria and conversing around your stories.


Agile Testing in the Enterprise
-    Janet Gregory

When agile development first gained popularity, agile meant collocated teams, including testers, programmers, analysts, and customers who were expected to perform many functions. As agile methods have spread and expanded, large organizations and those with globally-distributed teams are facing challenges with their agile deployment. Having worked with many such teams, Janet Gregory has observed ways that testing in agile teams can still help deliver a high-quality software product. Whether your agile team is part of an enterprise solution, part of a distributed team scattered across time zones with individuals working remotely from home, or part of an offshore outsourced project, you’ll take away methods and tools to help develop open communication and deal with cultural differences, both within an organization and across continents, specifically as they relate to testing activities.


SpecFlow and Selenium WebDriver Best Practices
-    Ellery Horton


Creating automation is easy. Maintaining automation is hard. When automation strategies fail, it is not because of a lack of automated tests. More often than not, automation strategies fail because the tests were not maintainable. These tests fall into disrepair and snowball beyond the automation team’s ability to fix them. Eventually, more time is spent fixing the automation than is spent finding defects in the customer facing products. This erodes the credibility of the automation team and the investment in automation the business has made.

Out of the box, SpecFlow and Selenium WebDriver offer little protection against this fate. Just using these testing tools is not sufficient. How, then, can we avoid the pitfalls of unmaintainable tests? What best practices can we follow to help us succeed at automation?
Join this session to:
· Learn about SpecFlow and Selenium features that can help make your automation more reliable and easier to maintain.
· See how the structure of your test automation is critical to the longevity of your automation effort.
· Get tips on how to better integrate SpecFlow into a continuous integration process.
· Apply these best practices in your own test code and automation frameworks.
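One structural practice behind most of these points is the page-object pattern: locators and interactions for a page live in one class, so a UI change means editing one place instead of every test. SpecFlow itself is a .NET tool, so the sketch below is a minimal illustration in Python-style Selenium terms instead; `LoginPage`, its element ids, and the duck-typed `driver` (anything with a Selenium-style `find_element(by, value)`) are assumptions for illustration, not a prescribed API.

```python
# A page object centralizes locators and interactions for one page, so a UI
# change requires editing one class instead of every test that touches it.
# Locators are plain (strategy, value) tuples; with the real Selenium bindings
# these would typically be (By.ID, "username"), etc.

class LoginPage:
    # All locators for this page live in one place.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "login-button")

    def __init__(self, driver):
        # driver is any object exposing a Selenium-style find_element(by, value).
        self.driver = driver

    def login(self, user, password):
        # Step definitions (e.g. SpecFlow bindings) call this one method
        # instead of scripting raw element lookups themselves.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

When a login field is renamed, only the locator tuple changes; every scenario that logs in keeps passing, which is the maintainability property the session is about.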


Leading the Next Generation of Testers
-    Mark Lyles


Every parent, coach, counselor, manager, mentor, teacher, or advisor, has experienced the complexity of guiding others in the right direction. If you’ve been one or more of these, then you have likely felt the desire in your heart and your mind to be part of a positive change in someone else’s life. This becomes the core of your passion. The joy of knowing that you were part of helping someone else grow is usually more than you can imagine.
If you’ve been in IT for long, and especially if you’ve been in software testing for long, then you have experienced the exponential growth in technology over the last few decades. You’ve seen the world move from large mainframe computers to desktops, to laptops, then to smartphones and mobile devices, and now wearables. Everywhere we turn there is a new software package that can “help you test better”. The strategies for software testing are changing. Yet, as with the roles listed above, we must understand the core values and goals of our craft. And we must instill this knowledge in the next generation of testers joining the workforce every day.
As a test manager or leader in a testing team, it is your responsibility to give direction and guidance to the team, which will not only help them to be successful but create a successful outcome for your organization. In this session, we will take a deep dive into the world of today’s test manager. We will discuss the core concepts, which ensure success in the position. You will hear inputs and suggestions from interviews with industry leaders regarding what works and what may not work. Join us as we prepare for the next generation of testers.


Mobile Test Design: What are you missing?
-    Jean Ann Harrison

We all know user experience can instantly make or break any company. For today’s mobile apps that means testing needs to go beyond functional tests, to considerations like performance, accessibility, intuitiveness or recovery from physical limitations of the device. Creating a culture of test design is key to improving a mobile product’s user experience. How much testing is appropriate?

In this presentation Jean Ann will break down the differences between mobile app architectures along with the appropriate tests for each type. Learn what kinds of nonfunctional tests are important to get maximum coverage. Expand your thinking about mobile test design beyond the functional to include hardware and OS conditions.

Takeaways include:

•    Distinctions between mobile app architecture types, to help testers identify appropriate non-functional tests
•    How to design unique non-functional tests that identify improvements to user experience, increasing business value and product ROI
•    How to develop personas to enhance your own customer UX tests
•    How hardware and OS conditions affect software behavior and limitations on each device

 



Weighing and Re-weighing the Risks
-    Lynn McKee    

Risk is an important element in any test strategy. Identifying and ranking risks can help focus a team especially when time and resources are limited. Hmmm, sounds like every project I have ever been on! The problem with planning for risk is that things can change, and fast! Every project throws a curve ball or two that forces us to re-evaluate our plans, priorities, resources, and even our sanity. Being able to make timely decisions about what tests matter most is crucial. This skill can make you invaluable to any project, no matter how challenging the project seems!


A Framework for Exploratory Testing
-    Mary Mehrer

Agile software development requires adaptive and rapid testing techniques. When testers need to design and execute tests in a matter of hours or days rather than weeks, they need to incorporate techniques allowing them to work quickly and effectively.

One technique for evaluating application behavior is exploratory testing. In this technique, testers are not limited to following pre-defined steps as in manual test cases. Instead, testers focus on defined test goals within a timebox, which allows them to assess not only what is working per requirements and design but also to follow paths of discovery as they examine application behavior.

Though it may sound contradictory, effective exploratory testing requires discipline and organization. The ability to track coverage, progress, and results is a must, particularly in a regulated environment. Session-based exploratory testing provides this capability and has become a widespread exploratory method.

This is a hands-on session that will cover session-based exploratory testing:
•    Defining and managing test charters
•    Prepping for an exploratory test session
•    Running an exploratory session
•    What happens after the session
We will run an exploratory session together on a simple app, where we will take notes and capture our findings and any potential bugs.
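The session artifacts above (a charter, a timebox, notes, and findings) are simple enough to model directly. As a minimal sketch, assuming nothing about any particular session-management tool, a charter record might look like this; all field names are illustrative:

```python
# A minimal record for one session-based exploratory testing session:
# a mission (the charter), a fixed timebox, and the notes and bugs
# captured during the session for the post-session debrief.
from dataclasses import dataclass, field

@dataclass
class SessionCharter:
    mission: str                  # what to explore and why
    timebox_minutes: int = 60     # fixed session length
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

    def log(self, note):
        # Running notes are what make exploratory coverage auditable.
        self.notes.append(note)

    def report(self):
        # Debrief summary: the charter plus counts of notes and bugs found.
        return {"mission": self.mission,
                "notes": len(self.notes),
                "bugs": len(self.bugs)}
```

Keeping the charter explicit is what separates disciplined session-based testing from ad hoc poking around: the mission bounds the session, and the report makes progress trackable.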


Data Strategies for Automated Testing
-    Paul Merrill

You’ve written a large suite of test cases over the last few months. But every time someone changes the data model or wipes the database, you find yourself working extra hours to fix broken test cases! Test cases that were once consistent, reliable, and passing now look like a wasteland of red failures.

Success in automated testing is tightly bound to how you interact with data, also known as your “Data Strategy”. 

In this presentation we’ll walk through several data strategies and their pros and cons. We'll talk about the constraints you experience in your environment and how to mix and match strategies appropriately.
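One such strategy, sketched here under stated assumptions, is having each test create its own uniquely named data rather than depending on shared fixtures, so a wiped or reshaped database never strands the suite on stale records. The in-memory SQLite table, schema, and naming scheme below are illustrative choices, not a prescription from the talk:

```python
# Strategy sketch: each test owns its data. Unique names avoid collisions
# with other tests or with leftovers from earlier runs.
import sqlite3
import uuid

def make_user(conn):
    """Create a throwaway user this test owns."""
    name = f"test-user-{uuid.uuid4().hex[:8]}"
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return name

def run_test():
    # Arrange: a fresh database and data created by the test itself.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    name = make_user(conn)
    # Act/assert against data we fully control.
    row = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
    assert row is not None
    conn.close()
    return name
```

The trade-off is speed versus independence: per-test data is slower to set up than a shared seed, but it survives schema churn and parallel runs, which is exactly the constraint-matching the presentation discusses.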

Join us to explore Data Strategies!

Paul Merrill
Beaufort Fairmont, LLC
beaufortfairmont.com


The Continuous Paradigm: How using a continuous integration system can change the way you test
-    Jared Richardson

Most test organizations are accustomed to slow feedback. Builds occur infrequently and deploys are even less frequent. Testers are rarely afforded the luxury of fast feedback. That’s changing with tools for continuous integration, continuous deployment, and continuous testing. As you begin to experience ongoing feedback from each system, you’ll find it changes the way you think about testing and product development. Come see how to set up your own system and start getting real-time feedback on the state of your product and servers.


From Velcro to Velocity
-    Rob Sabourin

There seems to be a lot of hype about TDD these days. Test-driven development is a programming style in which unit tests are written before the code they exercise. Each increment tests a new aspect of the software being developed. Programmers write just enough of a unit test to fail, then write just enough code to pass the failing test. Tests are rerun frequently to quickly catch unexpected changes.

Explore the strengths and weaknesses of this approach. Try it out with Rob using an IDE of black felt, laminated index cards, and of course tons of Velcro.

During this fun and highly interactive workshop you will create an application with TDD even if you have never programmed before.

Rob lets you compare your results with answers from the teacher's edition. Rob will also lead a group affinity analysis of TDD versus test-after unit testing approaches.


What’s in your cup of “T”?
-    Mary Thorn

Agile testers today are being asked to do a whole lot more than just testing. The notion of “T”-shaped people was coined by Tim Brown (CEO of IDEO) in the 1990s to describe this new breed of worker.
I believe that testers, actually – anyone, can contribute a lot more to the business than their standard role traditionally dictates. The tester’s critical and skeptical thinking can be used earlier in the process. Their other skills can be used to solve other problems within the business. Their role can stretch to include other aspects that intrigue them and keep them interested.
We have testers who write product documentation, are scrum masters, are building infrastructure to support rapid release, are taking ownership for security and compliance to standards, are presenting the development process to customers, are visiting customer sites to research how people are using the product, are writing social media content, are devising internal communication strategies, are doing agile coaching, are creating personas and are using their natural skills and abilities where they are best suited to help move the business forward.
In this track we will go over all the ways testers can now bring “value”.


Adventures in Performance Testing Mistakes
-    Mark Tomlinson

“Wow, did I mess that up!” As modern software engineers we know the value of learning from our failures. We also know that it’s better to learn from the failures and mistakes of others than to repeat the same mistakes in our own work. Allow me, then, to share my own personal “adventures” in performance testing: failures with load test automation, scripting, architecture, requirements, planning, and analysis. We’ll talk about testing efforts filled with missteps, incorrect configurations, blind spots, and false results. Case studies in how NOT to do performance testing, and the potential downsides to such errors in testing.


From Zero to Functional: How to Build Out Automated Testing on Your Mobile Team from Scratch
-    Matt Weis

The world of mobile automated testing is a relatively new and potentially confusing landscape, and many mobile teams have not yet integrated automated testing into their QA strategy. For the last year, Matthew Weiss has been integrating automated testing into the QA practices of Orbitz Worldwide’s iOS testing team using Appium. In this talk, learn how he took his team from zero to functional in automated testing over the course of one year. The lessons from this talk will be applicable to virtually any team implementing automated testing, regardless of which automated testing tool you plan on using.

Key Takeaways

  1. How to approach choosing the right automation tool for your team
  2. What resources are required to set up mobile automated testing
  3. Common challenges faced in setting up mobile automation
  4. Common mistakes to avoid