Before an administrator or business leader embarks on developing a corporate-wide performance measurement system, he or she needs to know how to meaningfully measure what matters most to the organization. An extensive review of how a company leader might improve his or her company, and thus become more competitive, points overwhelmingly to creating and implementing Key Performance Indicators (KPIs). KPIs are measurable procedures fashioned to inform an organization of what to do in order to strengthen performance, and they are generally regarded as a gold standard for measuring a company’s progress and success (Ahmed, Siantonas, & Siantonas, 2007; Behn, 2003; Gabcanová, 2012; Parmenter, 2010; Qfinance Website, 2009). KPIs are a set of quantifiable measurements that can be used to critically evaluate the present and future success of an organization (Behn, 2003; Parmenter, 2010; Qfinance Website, 2009). Parmenter (2010) warns about the misuse of KPIs and suggests that many companies are working with measurements they merely assume to be KPIs. Behn (2003) further posits that performance measures are not an end in themselves and advocates that different measures be evaluated before embracing KPIs alone.
With these cautions in mind, this document outlines examples and possible ramifications of moving from a reliance on KPIs for evaluating an organization to the more effective method of investigating and implementing process improvements.
The Systems Model as a Comparison of Organizations
A system is simply a collection of entities that forms a whole; within each system is a set of interacting subsystems and/or processes that come together to perform some type of function. Senge (2006) depicts systems thinking as a whole made up of parts in his rendition of a storm:
“A cloud masses, the sky darkens …. and we know that it will rain. We also know the storm runoff will feed into groundwater miles away and the sky will clear by tomorrow. All these events are distant in time and space, and yet they are all connected within the same pattern. Each has an influence on the rest…You can only understand the system of a rainstorm by contemplating the whole, not any individual part of the pattern” (p. 6). Senge (2006) further suggests that a systems thinking model reflects a learning organization model. A straightforward way to identify a working systems model that embraces learning is to compare it to a traditional organization. The Baldrige Performance Excellence Program and the Academic Quality Improvement Program (AQIP) offer a plausible comparison of system improvement agencies: one follows a traditional, quantitative evaluation method, while the other follows the learning organization model.
System Improvement Agencies
Following the paradigm of systems thinking with regard to generating organizational improvement, both the AQIP and Baldrige agencies offer tools for arriving at improvements.
The AQIP is one of a number of accreditation or reaccreditation pathways that a university can pursue (HLC Website, n.d.). An integral goal of the AQIP is to ensure that universities measure quality systematically and on a continual basis (HLC Website, n.d.). AQIP follows a process-improvement paradigm: its nine categories are, in essence, processes, each a series of actions or steps taken in order to arrive at a specific end. The nine categories or processes are fundamental, and AQIP persistently encourages quality checks among its universities, framing the checks as a “never-ending improvement of systems and processes that support” a mission (HLC, 2008, p. 7).
Another pathway a university can pursue is the Baldrige program, a federal program premised on the idea that a university can be empowered by seeing its goals reached and its results improved and, as an outcome, can become more competitive. Relying on its education performance criteria, the Baldrige missive suggests that the criteria give the university the tools needed to examine all parts of its “management system and improve its processes and results while keeping the whole organization in mind” (Baldrige Website, 2013).
As is evident, both the AQIP and the Baldrige models focus on the whole organization. However, as Table 1 depicts, the AQIP paradigm adheres to a process-improvement mentality that embraces the benefits of qualitative data, whereas the Baldrige favors a quantitative mentality that is assessment/standard driven.
Table 1: A Comparison of Focuses

| AQIP Categories | Baldrige Education Criteria Categories |
| --- | --- |

Taken from: HLC Website, 2008; Baldrige Website, 2013.
Although the Baldrige’s five core principles focus on student learning processes, customers, leadership and governance, finance and markets, and the workforce, its primary focus is to equip the university to be more competitive by utilizing a series of criteria made up of KPIs (Baldrige Website, 2013). On the other hand, AQIP’s missive for universities, a telltale characteristic of a learning organization, is “to contribute directly to student learning,” which AQIP considers to be “an educational institution’s primary purpose and achievement” (HLC, 2008, p. 7). Thus, rather than comparing each part of an organization against its success factors, which is indicative of KPIs, utilizing a systems model enables an organization to move from using KPIs solely to operating within a system process and improvement paradigm.
KPIs vs System Processes and Improvement
Theurer (as cited in Behn, 2003) captures the essence of KPIs impeccably when he states, “Always remember that the intent of performance measures is to provide reliable and valid information on performance” (p. 587). KPIs, then, can be used to measure an organization’s success or lack thereof. However, the hoped-for successes originate from a corporate strategy and from there trickle down to other departments, which in turn develop their own KPIs. KPIs, once established, are constant and remain so for the long run; they lack flexibility, and they hold people accountable (Behn, 2003; Parmenter, 2010; Qfinance, 2009). Behn (2003) outlines eight managerial purposes for measuring performance, with purpose two being “To control: How can public managers ensure their subordinates are doing the right thing?” (p. 589). Thus, KPIs control an organization, and consequences are meted out for KPIs that are not attained. The performances may offer functional measurements, but how do they align with the big picture of the organization? How do they create an atmosphere of camaraderie and partnership? And how does the organization learn from the measures?
Slater and Olson (1997) purport, “The fundamental reason we measure anything in a business is to determine when and how we should shift behavior” (p. 38). However, relying on KPIs alone runs the risk of reacting to isolated events or performance results and seeking blame, rather than looking at the whole system and taking into consideration the subsystems that interact with each other. Senge (2006) posits, “We all tend to blame someone for our problems …. Systems thinking shows us that there is no separate ‘other’; that you and the someone else are part of a single system” (p. 67). Thus, in order to make an improvement or to meet a goal, a change of input or process is necessary, rather than shifting or controlling behavior (Senge, 2006; Slater & Olson, 1997). A systems model paradigm is in order. Senge (2006) would suggest a “shift of mind from seeing parts to seeing wholes, from seeing people as helpless reactors to seeing them as active participants in shaping their reality, from reacting to the present to creating the future” (p. 69).
A Study of Organizational KPIs and Organizational Processes
Although there is benefit to implementing a system of criteria such as KPIs, the preceding discourse outlines some of the setbacks of relying on them solely, or at all. KPIs can inform, but the question that arises is: what is to be done with the data once it has been collected? Therein lies the main difference between KPIs and process improvements. A systems model of process improvements offers a learning environment that is more conducive to organizational improvement. Goncalves (2012) suggests that in “building and maintaining a learning organization you must look for traits, nurture some of them and eliminate others, so you can bridge the knowledge gap in the organization, to allow a successful knowledge transfer into action, from know-how to how-to” (p. 23). A study of organizational processes can facilitate a ‘we’ rather than an ‘I’ mentality, one in which improvement is seen as a collective endeavor and team building, shared vision, and creativity are embraced (Collins, 2001; Senge, 2006).
The Place of KPIs in a Dissertation
As seen, the acts of measuring performance, gauging results, altering behaviors, and controlling outcomes are executed to accomplish some improvement. However, there are reasons to believe the same cannot be said of using KPIs in a dissertation.
First, the number of KPIs recommended for seeing success within an organization can be vast, starting with 10 and possibly expanding as each one is attained (Behn, 2003; Parmenter, 2010). A more effective method would be to analyze the overall system, tackling one process at a time; this makes the work more manageable within the scope of a dissertation.
Second, KPIs are quantitative in nature, whereas an action research study, although it will include some quantitative data, will be heavily peppered with qualitative data. The ethnographic data will in turn lend itself to a partner mentality among all participants, allowing ownership of an improved process and thus resulting in a larger chance of success.
Lastly, rather than focus on quantitative KPIs, Goncalves (2012) suggests acting on the new learning, adapting behaviors accordingly, and turning the knowledge into action. A vehicle to facilitate this would be an action research study.
Action Studies and Dissertations: The Comparison
Both the dissertation and the action research study begin with identifying a problem or an issue, and then seek to discover answers. The dissertation includes a problem statement that identifies what needs to be fixed, whereas the action research study makes a statement that expresses what is being done adequately and then asks how it can be improved. Additionally, the traditional dissertation, when complete, is done and may never be viewed again. The difference, then, between the two is that the action research project should generate a life-changing, continual organizational transformation. Guy (1949) suggests that the first level of educational research is fundamental, purporting that it is “characterized as independent experimental research studies which are aimed at the discovery of ‘truth’ from which educational practice may profit; it is concerned primarily with gathering knowledge which will be beneficial to education” (p. 193).
A core definition of action research, as outlined by O’Leary (2007), is “strategies that tackle real-world problems in participatory, collaborative and cyclical ways in order to produce both knowledge and action.” Walker (2014) further lists the following five best practices of action research:
1) Seeking the strengths of an organization and improving said practices
2) Collecting data on a cyclic basis
3) Ensuring a participatory and collaborative effort
4) Investing in people at the local level while still addressing ethical issues
5) Reflecting, sharing and reporting creatively (p. 5).
The action research dissertation that embraces the above characteristics celebrates positives and fosters an environment of partnership and camaraderie, rather than the running scorecard that KPIs generate (Behn, 2003; Garman & Piantanida, 2009; Qfinance, 2009). Moreover, an action research dissertation tackles one process at a time, but in a cyclic manner in which it is continually improving, being innovative, and allowing for further studies in the future.
In conclusion, a multi-cycle action research study carried out in an organizational system can have KPIs as a starting point, but the action research must move from a focus on quantitative criteria to one that embraces qualitative process improvements.
Ahmed, A., Siantonas, G., & Siantonas, N. (2007). 13 key performance indicators for highly effective teams. Sheffield, UK: Greenleaf Publishing.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63(5), 586-606. Retrieved from http://search.proquest.com.library.capella.edu/docview/197175585?accountid=27965
Collins, J. (2001). Good to great: Why some companies make the leap and others don’t. New York, NY: Collins Business.
Gabcanová, I. (2012). Human resources key performance indicators. Journal of Competitiveness, 4(1). Retrieved from http://search.proquest.com.library.capella.edu/docview/1315216734?accountid=27965
Garman, N. B., & Piantanida, M. (2009). Qualitative dissertation: A guide for students and faculty (2nd ed.). Thousand Oaks, CA: Corwin Press. Retrieved from ProQuest ebrary.
Goncalves, M. (2012). Learning organizations: Turning knowledge into actions. New York, NY: Business Expert Press. Retrieved from ProQuest ebrary.
Guy, G. V. (1949). Recent developments in curriculum research — A selected bibliography. Educational Leadership, 7(3), 193-197. Retrieved from http://search.ebscohost.com.library.capella.edu/login.aspx?direct=true&db=ehh&AN=19017921&site=ehost-live&scope=site
Higher Learning Commission. (2014). AQIP categories. Retrieved from http://www.hlcommission.org/Pathways/aqip-categories.html.
National Institute of Standards and Technology. (2013). Baldrige education criteria for performance excellence. Retrieved from http://www.nist.gov/baldrige/publications/ed_about.cfm
National Institute of Standards and Technology. (2013). Baldrige education criteria for performance excellence: Category and item commentary. Retrieved from http://www.nist.gov/baldrige/publications/upload/2013-2014_Education_Criteria_Free-Sample.pdf
O’Leary, Z. (2007). Action research. In The social science jargon-buster. London, UK: Sage. Retrieved from http://search.credoreference.com.library.capella.edu/content/entry/sageukssjb/action_research/0
Parmenter, D. (2010). Key performance indicators (KPI): Developing, implementing, and using winning KPIs (2nd ed.). Hoboken, NJ: Wiley. Retrieved from http://site.ebrary.com/lib/capella/Doc?id=10366542&ppg=23
Senge, P.M. (2006). The fifth discipline: The art & practice of the learning organization. New York, NY: Doubleday/Currency.
Slater, S. F., & Olson, E. M. (1997). Strategy-based performance measurement. (cover story). Business Horizons, 40(4), 37. Retrieved from http://ezproxy.library.capella.edu/login?url=http://search.ebscohost.com.library.capella.edu/login.aspx?direct=true&db=bth&AN=9709120490&site=ehost-live&scope=site
Stringer, E.T., (2014). Action research. Thousand Oaks, CA: Sage.
Understanding key performance indicators. (2009). In Qfinance: The ultimate resource. Retrieved from http://library.capella.edu/login?url=http://search.credoreference.com.library.capella.edu/content/entry/qfinance/understanding_key_performance_indicators/0.
Walker, B. (2014, April 13). Action research or traditional experimental research? Retrieved from http://pdhed.com/2014/04/13/action-research-or-traditional-experimental-research/.