Led 15-person multi-disciplinary design thinking team for cloud-based
Data Server Manager (DSM). Led customer engagements and feedback analysis
to design a seamless workflow integrating four previously separate products
for database performance monitoring, administration, configuration management, and optimization.
Developed interactive prototype using jQuery, Dojo, and Flot.
Used agile methods to deliver iteratively to Beta customers.
1. Championed Design Thinking Process
As the design thinking team lead, I was responsible for championing a process and development
philosophy focused on user-centered results. This meant defining release hills that told
user stories. It meant defining those goals through early and ongoing partnerships with sponsor users.
And it meant frequent design playbacks to walk through those stories with designs that align the team and
stakeholders around those user goals.
2. Led Organizational Kickoff
To kick off design thinking in the organization, I held a meeting with the extended team of roughly 200 people, including
UI and backend developers, writers, QA, support, sales enablement, managers, and UX team members.
I outlined the design thinking process and how we were to focus the release milestones on user-centered
hills. I worked closely with development architects and other team leaders to customize our
agile process to align developer tasks to user stories and epics to the user-centered release hills.
3. Defined Release Hills
Through iterative brainstorming sessions, I led the extended team (including sponsor users) to define
release hills oriented to a user-centric Who, What, and Wow. 'Who' is a hands-on
user role, clearly supported by user research and personas. 'What' is the
goal-oriented task that the user will accomplish. And 'Wow' is a measurable
outcome that provides significant business value to the users. For example, a hill might read:
a database administrator (Who) can identify which of thousands of monitored databases needs
attention (What) within minutes of logging in (Wow). The hills to the right/below
were supported by additional detailed use cases and goals.
4. Conducted Customer Site Visits
Early in the project cycle, a lead development architect and I made several visits to
customer sites — our sponsor users. We made a special effort to observe multiple
user roles at work and the interactions among them. This provided enormous insight
into the culture, task flows, and pain points. We brought all of this observation back to
the design team for debriefing sessions focused on brainstorming design solutions.
5. Created Scenarios and Storyboards
The video to the right/below shows an example scenario / storyboard that we used
to guide our design for monitoring and performance tuning. Each step in the storyboard
starts with a user goal and illustrates interactions and backend support needed to
satisfy that goal. I created this storyboard based on dozens of interviews with
users from multiple roles. The personas link below shows some of the details supporting those roles.
6. Created Interactive Mockups
The video to the right shows an interactive set of mockups illustrating how a user would use the
menus and other navigation elements to quickly switch from the enterprise overview (Home), to administration
(browsing the database objects), to monitoring an individual database with drill-down on specific objects
like in-flight statements. I created these mockups to guide the coding team in their development after a
lot of brainstorming and experimentation by the design team.
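The shipped mockup code isn't reproduced here, but the kind of hash-based view switching this navigation implies can be sketched in a few lines of Dojo (1.7+ AMD style); the view names below are hypothetical:

```javascript
// Illustrative sketch only -- not the shipped prototype code. Hash-based
// view switching in Dojo 1.7+ AMD style; view ids are hypothetical.
require(["dojo/hash", "dojo/topic", "dojo/dom-class", "dojo/domReady!"],
function (hash, topic, domClass) {

    // Each view is a <div> whose id matches a hash token:
    // #home (enterprise overview), #administer, #monitor.
    var views = ["home", "administer", "monitor"];

    function showView(name) {
        views.forEach(function (v) {
            // Hide every view except the requested one.
            domClass.toggle(v, "hidden", v !== name);
        });
    }

    // Back/forward buttons and menu clicks both land here.
    topic.subscribe("/dojo/hashchange", function (newHash) {
        showView(views.indexOf(newHash) >= 0 ? newHash : "home");
    });

    // Initialize from the current URL, defaulting to Home.
    showView(hash() || "home");
});
```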
7. Designed Enterprise Overview
The Enterprise Overview, or what we eventually called "Home", was an essential part of our design.
Our goal was for the tool to be able to monitor up to 2000 databases simultaneously and give users
the ability to quickly focus on the highest priorities. Thus the design incorporated the ability to
sort by number and severity of alerts, to filter on names, tags, or groups, and to create custom
groups of related databases. One of my lead designers, under my supervision, created the interactive
prototype in the video to the right/below.
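To give a flavor of the logic behind that Home view, here is a small illustrative sketch of sorting by alert severity and filtering on names or tags; the field names are assumptions, not the shipped data model:

```javascript
// Illustrative sketch of the Home view's sort/filter logic; the field
// names are assumptions, not the shipped data model.
var databases = [
    { name: "SALESDB", tags: ["production", "oltp"],  alerts: { critical: 2, warning: 5 } },
    { name: "HRDB",    tags: ["production"],          alerts: { critical: 0, warning: 1 } },
    { name: "TESTWH",  tags: ["test", "warehouse"],   alerts: { critical: 0, warning: 0 } }
];

// Rank by severity first (critical before warning), then by count, so the
// databases most in need of attention float to the top of the overview.
function bySeverity(a, b) {
    return (b.alerts.critical - a.alerts.critical) ||
           (b.alerts.warning - a.alerts.warning);
}

// Match a name substring or a tag/group label typed into the filter box.
function matches(db, text) {
    text = text.toLowerCase();
    return db.name.toLowerCase().indexOf(text) >= 0 ||
           db.tags.some(function (t) { return t.indexOf(text) >= 0; });
}

// Example: show production databases, worst first.
var visible = databases
    .filter(function (db) { return matches(db, "production"); })
    .sort(bySeverity);
```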
8. Designed Query Tuning
Query tuning was one of the two most common and highest priority tasks that our users identified.
Poorly performing queries are a frequent cause of slow response times and application outages for
both transactional and warehousing databases. Our design goal was to get users to suggested solutions
quickly, but also to provide the underlying data for analysis and for justifying system changes.
One of my lead designers created the mockups in the video to the right/below, based on step-by-step
analysis that I created along with sponsor users.
9. Designed Outlier Gauges
One of the common problems identified by our users was that performance monitoring thresholds are difficult
to set and maintain, because the thresholds change based on time of day, workload, and frequency of execution.
To solve this problem, I worked with the backend architects to design a learning algorithm to identify
normal behavior at the database and statement level. I then led the design team to create multiple UI
mechanisms to surface normal and outlier states in the user interface. The gauges to the right are what
we used on the Enterprise Overview to identify the databases needing attention.
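The production algorithm belonged to the backend and isn't reproduced here, but the gist is easy to sketch: learn a normal band per time slot and flag values that fall well outside it. The following is illustrative only, using a running mean/variance per day-of-week/hour bucket with a k-sigma outlier test:

```javascript
// Illustrative sketch only -- the production algorithm was designed with
// the backend architects. Idea: learn a normal band per day-of-week/hour
// bucket with a running mean/variance, then flag k-sigma outliers.
function Baseline() {
    this.buckets = {};   // "dow:hour" -> { n, mean, m2 }
}

Baseline.prototype.key = function (ts) {
    return ts.getDay() + ":" + ts.getHours();
};

// Welford's online update, so the baseline adapts as samples arrive.
Baseline.prototype.learn = function (ts, value) {
    var k = this.key(ts);
    var b = this.buckets[k] || (this.buckets[k] = { n: 0, mean: 0, m2: 0 });
    b.n += 1;
    var delta = value - b.mean;
    b.mean += delta / b.n;
    b.m2 += delta * (value - b.mean);
};

// Outlier if the sample sits more than k standard deviations from the
// mean learned for this time slot (default k = 3).
Baseline.prototype.isOutlier = function (ts, value, k) {
    var b = this.buckets[this.key(ts)];
    if (!b || b.n < 2) return false;            // not enough history yet
    var sd = Math.sqrt(b.m2 / (b.n - 1));
    return Math.abs(value - b.mean) > (k || 3) * sd;
};
```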
10. Designed Database Dashboard
Another area where outliers were an essential part of the design was in the database dashboard. Again,
we would identify the normal ranges for performance metrics of a given database based on the time of day and
day of the week. At a glance, users could see whether everything looked normal or there was a significant outlier.
Outliers would be highlighted, and suggestions for further investigation and fixes would be displayed in the
context of the outlier. You can find a more detailed video in the More Details link below.
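On the presentation side, a chart of this kind can be sketched in a few lines of Flot, shading the learned normal band so an outlier stands out at a glance; the data and bounds below are made up, and the production charts were richer:

```javascript
// Illustrative sketch: plot a metric with its learned normal band shaded.
// Requires jQuery and jquery.flot.js; the data and bounds are made up.
var metric = [[1, 42], [2, 45], [3, 44], [4, 97], [5, 43]];
var low = 38, high = 52;   // the learned normal range for this time slot

// #dashboard-chart is a sized <div> placeholder for the Flot canvas.
$.plot($("#dashboard-chart"), [
    { data: metric, lines: { show: true }, points: { show: true } }
], {
    grid: {
        // Shade the normal band across the whole x range.
        markings: [ { yaxis: { from: low, to: high }, color: "#e8f4e8" } ]
    }
});
```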
11. Developed Interactive Prototype
Sometimes design teams spend too much time on static mockups that fail to surface key interaction issues.
I lead my design teams to develop their interactive prototyping skills and to get hands-on testing early.
In this project, I initiated the prototyping and organized the team to deliver parts of a larger
interactive prototype that we co-authored through Subversion in Eclipse. The graphic to the right/below lists some
of the technologies we used in our prototype. Some of the dashboard and navigation code I wrote was
delivered with the product.
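The convention that made co-authoring workable can be sketched roughly as follows (the module and panel names here are hypothetical): each designer's piece registered itself with a small shell, so parts could be committed and integrated independently:

```javascript
// Rough sketch of the co-authoring convention (names are hypothetical):
// each designer's panel registered itself with a small shell, so pieces
// could be committed to Subversion and integrated independently.
var Prototype = {
    panels: [],
    register: function (panel) { this.panels.push(panel); }
};

// One designer's contribution, e.g. an alerts panel for the Home view.
Prototype.register({
    id: "alerts-panel",
    render: function (container) {
        $(container).append('<div class="panel" id="alerts-panel">Alerts</div>');
    }
});

// On page load the shell stitches all registered panels together.
$(function () {
    Prototype.panels.forEach(function (p) { p.render("#home-view"); });
});
```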
12. Delivered Iteratively to Beta Customers
Our sponsor users were given access to early drivers focused on limited scenarios / stories.
This gave us early and ongoing design feedback and allowed us to make changes and improvements
to be delivered in each subsequent driver. We would gather quantitative and qualitative feedback
at each driver: How long did it take to get up and running? Could they monitor hundreds of databases?
Could they solve a locking problem? In addition to the sponsor users involved in the project from
beginning to end, we also gathered feedback from various users to get first impressions from
people who had not been previously exposed to the design. As you can see to the right/below, the
final feedback was very positive.