summarized by: Evangelista L.W. Palupi
This is a review of a chapter of van den Akker’s book on design research. In this chapter, “Principles and Methods of Development Research”, van den Akker discusses the role of research in relation to educational design and development activities.
In the first part of Chapter 1, van den Akker focuses on describing the rationale and basic principles of development research by outlining the motives for conducting development research, the definition and aims of development research, and its key characteristics. The methods of development research, its problems, and its major challenges are treated in the second part of the chapter, which is summarized in this section.
5. Methods of Development Research
Methods of development research are not necessarily different from those in other research approaches. However, there are some specific features that are worth discussing here to further clarify the image of development research. The first one has to do with the central role of formative evaluation procedures in formative research. The second aspect refers to several typical methodological problems and dilemmas for development researchers.
5.1 Formative Evaluation as Key Activity
Formative evaluation holds a prominent place in development research, especially in formative research. The main reason for this central role is that formative evaluation provides the information that feeds the cyclic learning process of developers during the subsequent loops of a design and development trajectory. It is most useful when fully integrated in a cycle of analysis, design, evaluation, revision, et cetera, and when contributing to improvement of the intervention. However, a few typical characteristics of formative evaluation within the context of development research approaches deserve some elaboration.
Formative evaluation within development research should not only concentrate on locating shortcomings of the intervention in its current (draft) version, but especially generate suggestions on how to improve those weak points. Richness of information, notably salience and meaningfulness of suggestions on how to make an intervention stronger, is therefore more productive than standardization of methods to collect and analyze data. Also, efficiency of procedures is crucial: the lower the costs in time and energy for data collection, processing, analysis, and communication, the bigger the chances of actual use and impact on the development process.
The basic contribution of formative evaluation is to quality improvement of the intervention under development. During development processes, the emphasis in criteria for quality usually shifts from validity, to practicality, to effectiveness (cf. Nieveen’s chapter 10 in this book). Validity refers to the extent that the design of the intervention is based on state-of-the-art knowledge (‘content validity’) and that the various components of the intervention are consistently linked to each other (‘construct validity’). Practicality refers to the extent that users (and other experts) consider the intervention as appealing and usable in ‘normal’ conditions. Effectiveness refers to the extent that the experiences and outcomes with the intervention are consistent with the intended aims.
5.2 Problems and Dilemmas in Development Research
In this section, van den Akker briefly describes some typical problems and dilemmas faced by researchers when doing development research. Some of these problems are:
- Tension in role division between development and research. A tension can easily arise between designers, who are eager to pursue their ideals in creating innovative interventions, and researchers, who tend to critically seek correctness of decisions and empirical proof of outcomes.
- Isolating ‘critical’ variables versus comprehensive and complex design. A typical difference between formative research and many other sorts of research is that one can hardly isolate, manipulate, and measure separate variables in the same study. On the contrary, it is the very nature of formative research to investigate comprehensive interventions that deal with many interrelated elements at the same time, which makes such isolation very hard to achieve.
- Generalization of findings. Since data collection in formative research is usually limited to small (and purposive) samples, efforts to generalize findings cannot be based on statistical techniques focusing on generalization from sample to population. Instead, one has to invest in ‘analytical’ forms of generalization: readers need to be supported in making their own attempts to explore the potential transfer of the research findings to theoretical propositions in relation to their own context.
6. Major Challenges for Development Research
As a relatively new and upcoming research approach, development research has its potentials, limitations, and challenges for those who are interested in further exploration and improvement of its methodology.
A challenging trend for designers is the increasing prominence of prototyping approaches. Various questions arise: What does (rapid/evolutionary) prototyping imply for efficiency of the development process? Will it affect the balance between creative and systematic features of the approach? Does it reduce the relevance of preliminary investigations? To what extent does it influence the relationship between methodology (as prescribed in literature) and actual design activities in professional practices (can ‘theory’ keep up with ‘practice’, or will the gap even widen)?
Many challenges are also apparent with respect to evaluation methodology. What are appropriate tactics for increasing the information richness and efficiency of data collection procedures and instruments? How may the linkages between data collection, processing, and analysis be optimized? How can the communication about evaluation findings and their subsequent utilization for improvement of interventions be furthered? What are the most relevant indicators of quality, success, and impact of interventions? What are promising approaches to further the generalizability of research findings? How can the utilization of evaluation findings in design tasks in other settings be facilitated? Many useful suggestions and examples of such tactics are already offered in various publications (e.g. Miles and Huberman, 1994; Walker, 1992), but additional support is quite welcome.
An overall reflection is that research-based progress to expand and sharpen our knowledge on design and development is greatly enhanced through interdisciplinary approaches with purposive cross-fertilization between the many specialized subdomains in educational science and technology. Moreover, it is our own experience that joint development research efforts of professionals in various roles offer fine opportunities for professional learning and capacity building. Such activities have the potential to sometimes produce outcomes with ‘interocular’ significance: results that hit you between the eyes (Scriven, 1996).