
How to Conduct an Empirical Research Study in the Field of Psychology

The scientific method is ‘a set of procedures, guidelines, assumptions, and attitudes required for the organized and systematic collection, interpretation, and verification of data, and the discovery of reproducible evidence, enabling laws and principles to be stated or modified’ (American Psychological Association [APA], 2021). A crucial part of the scientific method is empirical research, which draws on American philosopher Charles Peirce’s method of science and relies on ‘experimentation and systematic observation rather than theoretical speculation’ to obtain knowledge (APA, 2021). Peirce described four ways of fixing belief: the method of tenacity, the method of authority, the a priori method, and the method of science (Feibleman, 1969). The latter, according to Peirce, combines observation with rational analysis to test hypotheses, yielding knowledge that would be the same for everyone, irrespective of their frame of reference. This combination entails abduction (the generation of hypotheses based on observed phenomena), deduction (the derivation of specific predictions from those hypotheses), and induction (the testing of the hypotheses through observation and experimentation to determine their validity) (Santaella Braga, 2019).

In psychology, empirical research is of undeniable importance. It is the cornerstone of psychology, providing a robust foundation for evidence-based practice by supplying the necessary evidence to support psychological theories and validate the effectiveness of therapies and interventions for clients. It bolsters the scientific credibility of the discipline through meticulous observation, experimentation, and data analysis, ensuring psychology is recognized and respected as a science. By rigorously testing hypotheses and validating theories, empirical research drives the advancement of knowledge, uncovering new insights into human behaviour and mental processes. This, in turn, guides informed decisions that shape policy, education, healthcare, and societal welfare. Moreover, it cultivates critical thinking and scepticism among psychologists, prompting them to critically evaluate assumptions and seek solid empirical backing for their claims.

Empirical research in psychology encompasses a series of methodical steps designed to gather and analyse data in order to understand psychological phenomena. The purpose of this essay is to outline these steps. The framework presented, in alignment with the scientific method, ensures the research is grounded in actual observation or experience, providing a reliable foundation for knowledge advancement in the field. However, it must be noted that most research endeavours do not entail a neat, linear progression from one step to the next. Overlaps, revisions, additions, and iterations are commonplace and sometimes even necessary in research practice. In the words of German-American rocket engineer Wernher von Braun, ‘Research is what I am doing when I don’t know what I am doing’ (von Braun, 1957).

 

Formulating Research Questions & Hypotheses

Crafting a research question is complex and often requires inspiration from numerous sources. These may encompass a sudden and creative idea (Daft, 1983), purposeful idea generation (Leong & Muccio, 2006; Novak, 2004), one’s passions, prevailing issues within society, interactions with others such as peers and mentors (Campbell et al., 1982), and a range of published works, including primary scientific journal articles (Johnson & Christensen, 2008), textbooks (e.g., Gravetter & Forzano, 2009), conference proceedings, theses, dissertations (Leong et al., 2012), and media reports.

A research question makes a substantial and innovative contribution to the body of knowledge or to a conversation among researchers about a phenomenon of interest (Daft, 1985; Huff, 2009; Petty, 2006). In agreement with Daft (1983) and Thomas’s (1974; cited in Daft, 1983) views on research, Leong et al. (2012) suggest that an innovative research question introduces uncertainty regarding the study’s results and is framed around this uncertainty, considering the likelihood of one or more outcomes. Additionally, a research question should be pertinent to the academic community’s concerns, and formulating one is not a solitary pursuit (Campbell et al., 1982). Petty (2006) advised that once a research idea is conceived, it is essential to validate it by examining the existing literature and seeking input from fellow academics engaged in the relevant discourse. Leong et al. (2012) put forth three elements that are crucial in evaluating significant research ideas: an understanding of the scholarly and theoretical literature; attention to practical issues or the implications of fundamental theoretical research; and a theory that is foundational for consolidating the understanding of an issue, facilitating the emergence of generalizable insights or knowledge.

The research questions are then reframed as testable hypotheses. The APA (2021) defines a hypothesis as ‘an empirically testable proposition about some fact, behaviour, relationship, or the like, usually based on theory, that states an expected outcome resulting from specific conditions or assumptions.’ Descriptive studies often use exploratory hypotheses; e.g., a researcher might hypothesize that there are patterns in consumer behaviour during holiday seasons, without specifying a particular relationship between variables. Interpretive studies usually use conceptual or theoretical hypotheses, which guide the exploration rather than statistical testing; e.g., a researcher might hypothesize that the symbolic meanings associated with a religious ritual shape individuals’ sense of identity. Studies attempting to establish a correlation or causation between two variables use simple hypotheses, which may be associative (for correlational studies) or causal (for studies attempting to establish a cause-effect relationship). These associative or causal hypotheses can be directional or non-directional. In more complex scenarios, studies may explore correlations or causations among multiple variables; such studies employ complex hypotheses, which share the same subtypes as simple hypotheses.

In correlational and experimental studies, researchers frame null, alternative, and operational hypotheses. The alternative hypothesis suggests that there is a relationship between variables; it can be stated in directional or non-directional terms. The null hypothesis, on the contrary, states that there is no relationship between said variables. An operational hypothesis proposes a relationship between variables while also defining those variables in operational terms, i.e., how they will be measured or manipulated within the study. Through analysis, the researcher attempts to reject the null hypothesis, thereby lending logical support to the alternative hypothesis.
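To make the null/alternative logic concrete, the sketch below (not part of any cited study; the data are invented) frames a non-directional alternative hypothesis about two group means and tests the null hypothesis of no difference. A two-sample z-test (a normal approximation) is used here only because it requires nothing beyond Python’s standard library; small-sample studies would ordinarily use a t-test.

```python
from statistics import NormalDist, mean, stdev

def two_sample_z_test(group_a, group_b, alpha=0.05):
    """Two-sided two-sample z-test (normal approximation).

    H0: the population means are equal.
    H1 (non-directional): the population means differ.
    """
    na, nb = len(group_a), len(group_b)
    # Standard error of the difference between the two sample means.
    se = (stdev(group_a) ** 2 / na + stdev(group_b) ** 2 / nb) ** 0.5
    z = (mean(group_a) - mean(group_b)) / se
    # Two-tailed p-value from the standard normal distribution.
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p, p < alpha  # True -> reject H0

# Invented example data: test scores under two conditions.
control = [72, 75, 68, 70, 74, 69, 71, 73]
treatment = [78, 80, 76, 82, 79, 77, 81, 75]

z, p, reject = two_sample_z_test(control, treatment)
```

Rejecting H0 here supports, but does not prove, the non-directional alternative that the two conditions differ.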

 

Conducting a Review of the Literature

Despite its importance, commanding the breadth and depth of the literature on a chosen topic is a difficult feat. Academics depend on databases and indexing resources to navigate and understand the published literature. Ramirez and Foster (2023) suggest that the literature search strategy should be well planned and aligned with the purpose of the search in order to avoid inefficient searching, unnecessary repetition in reviewing results, and wasted time. They propose the following steps to guide a literature review:

  1. Identify the search type (lookup, exploratory, selective, comprehensive), its objective, and expected outcome. For a review, choose the kind of review (narrative, systematic, structured) and formulate the question.

  2. Consider seeking guidance from a librarian and choosing appropriate software tools. Consultations can enhance the efficiency of planning and searching, improve search skills, and contribute to a more thorough and systematic search process. The Systematic Review Toolbox is a database of review software that provides links to, and reviews of, free and paid programs. Mendeley, Zotero, EndNote, and RefWorks are examples of citation management tools. Rayyan and SysRev are examples of study selection software, which facilitates citation organization by identifying important terms, enabling rapid labelling, and removing duplicate entries.

  3. Establish the search parameters. These include eligibility criteria, the type of study, particular outcomes, measures, or time of follow-up with participants, year of publication, language, disciplines, etc. The eligibility criteria are then translated into search concepts, which usually include population characteristics and association or intervention terms with a minimum of one other concept.

  4. Choose which databases to utilize for the search. This involves matching the topic to a database’s content coverage. Most bibliographic databases index and abstract publications, including journals, conferences, books, and dissertations. Thesaurus terms in databases vary by subject discipline. Some databases are freely accessible, while others are subscription-based. Subject databases focus on specific disciplines (e.g., APA PsycInfo for psychology, ERIC for education). 

  5. Look for reviews that are related to the topic. This allows for a more efficient overview of the literature while minimizing the effort required. It also helps identify unique research questions. Analysing existing reviews informs the scope of a review and provides valuable search terms and databases. Relevant databases for related reviews include APA PsycInfo, ERIC, and Medline. Additionally, specialized databases like Cochrane Reviews, Campbell Collaboration, and Joanna Briggs Institute focus solely on reviews. Protocol registries like Prospero describe ongoing systematic reviews.

  6. Craft a search strategy for the most pertinent database. This involves consolidating search concepts, limits, and thesaurus terms from the most relevant database based on the primary topic of interest. For each concept, it is essential to consider various synonyms derived from related reviews, articles, practitioner terminology, international usage, and historical terms if the concept has evolved. When searching the first database, it is recommended to explore the thesaurus terms associated with each concept. To create a comprehensive search, the researchers must combine keyword terms using OR with the corresponding thesaurus terms (when available). The final search string utilizes AND to combine the terms for each concept.

  7. Adapt your search strategy for use in additional databases. When transitioning from one database to another, it is advisable to maintain consistent keyword searches whenever possible, even though there might be variations in syntax across different interfaces. Moreover, for each additional database, the corresponding thesaurus terms that align with each concept should be identified.

  8. Incorporate grey literature, which encompasses unpublished works such as clinical registries, dissertations and theses, conference papers and posters, technical or government reports, and other white papers. The types of reports to be searched should be selected based on their likelihood of yielding studies on a given subject.

  9. Employ alternative searching methods, such as citation searching, contacting authors, and browsing.

  10. Finalize, record, and present the details of your search. For systematic reviews, a protocol listing all resources and search techniques should be created. Keep updating the search until manuscript submission. For exploratory searches, it is advisable to set a limit (e.g., by resource count or date). Document key elements, including database details, search terms, and citation counts. The PRISMA-S standard (the PRISMA extension for reporting literature searches) should be followed for comprehensive documentation and transparency.
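The OR/AND combining described in step 6 can be sketched in code. The helper below is hypothetical (real database interfaces such as Ovid or EBSCOhost each have their own syntax for truncation and phrase searching), but it shows the structure of the final search string: synonyms within a concept joined with OR, concepts joined with AND.

```python
def build_search_string(concepts):
    """Combine synonyms within each concept with OR, then join concepts with AND.

    `concepts` is a list of lists: each inner list holds the keyword and
    thesaurus terms for one search concept.
    """
    blocks = []
    for terms in concepts:
        # Quote multi-word phrases so they are searched verbatim.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        blocks.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(blocks)

# Hypothetical concepts for a review on adolescent social-media use and anxiety.
query = build_search_string([
    ["adolescen*", "teenager*", "youth"],
    ["social media", "social networking sites"],
    ["anxiety", "anxious"],
])
```

The resulting `query` can then be adapted to each database’s interface, substituting that database’s thesaurus terms per step 7.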

 

Philosophical Foundations: Epistemology & Ontology

Choosing an epistemological and ontological perspective is a fundamental part of conducting empirical research, as it shapes the entire approach to the study. Epistemology refers to the theory of knowledge, particularly with regard to its methods, validity, and scope, and is the investigation of what differentiates justified belief from opinion. When selecting an epistemological stance, researchers must consider how they believe knowledge can be acquired and what they consider valid knowledge. For instance, positivist epistemology suggests that knowledge is only valid when it is based on observable and measurable facts, thus aligning with quantitative research methods. Conversely, an interpretivist epistemology may hold that knowledge is subjective and best understood through qualitative measures (Grix, 2004).

Ontological choices involve assumptions about the nature of reality. A realist ontology, for instance, posits that there is an objective reality that can be known, while a relativist ontology assumes that reality is socially constructed and can vary between individuals and cultures (Snape & Spencer, 2003). The distinction and link between ontology and epistemology are contentious, especially among constructivist researchers who often reject the clear-cut separation between the two (Furlong & Marsh, 2010).

The choice of the ontological stance fundamentally influences the epistemological approach, which, in turn, guides and provides a rationale and a clear, logical framework for the selection of research methodology and methods (Al-Ababneh, 2020; Kincheloe & Berry, 2004). Clarity in the relationship between what can be researched (ontology), what we can know about it (epistemology), and how to go about acquiring it (methodology) is crucial for the quality of research and the credibility and defensibility of the research findings (Grix, 2004; Snape & Spencer, 2003).

Research Design & Methodology

Methodology refers to the techniques used to collect data and the procedures used to analyse them (APA, 2018). The APA (2018) defines research design as ‘a strategic plan of the procedures to be followed during a study in order to reach valid conclusions, with particular consideration given to participant selection and assignment to conditions, data collection, and data analysis.’ The selection of an appropriate research methodology is a critical decision in empirical research, and it should be guided by several interconnected factors. The research methodology should directly address the research question and hypotheses. Choosing the right methodology ensures the collection of relevant data that answer the research question effectively. Ontology and epistemology create a holistic view of knowledge. They influence how researchers position themselves in relation to knowledge and guide methodological choices. The methodology is also concerned with the type of data sought by the researchers. On this basis, research methods are broadly classified as qualitative, quantitative, and mixed-methods.

Qualitative Methods

Qualitative inquiry aims to collect and analyse non-numerical (descriptive) data, which are useful in understanding people’s beliefs, experiences, attitudes, behaviour, motivations, and interactions (Pathak et al., 2013; Strauss & Corbin, 1990). Smith (1996) highlights that qualitative research’s primary strength lies in its ability to collect comprehensive data that penetrate the participant’s personal interpretation of their experience, granting researchers an intimate viewpoint.

A qualitative researcher can choose a data collection method from a variety of options, based on the goals of their research. These involve in-depth interviews with individuals or groups to gather their perspectives, experiences, and opinions on a given topic; focus group discussions led by a facilitator on a particular topic, where participants share their views, experiences, and opinions; observation and recording of behaviour and interactions of individuals or groups in a given setting; case studies for in-depth analysis of a particular individual, group, or organization over an extended period; document or discourse analysis to gain insight into a particular topic; visual data analysis (photographs, videos, drawings) to understand people’s experiences or perspectives as these data capture emotions, symbols, and non-verbal communication; and online data collected from social media platforms, forums, and online communities, which reveal public opinions, trends, and virtual interactions.

Qualitative data analysis tools each have unique approaches to understanding data: Grounded Theory is a systematic methodology that involves constructing theories through methodical data gathering and analysis. Thematic Analysis (TA) is a widely used method for identifying, analysing, and reporting patterns (themes) within data. Some psychologists consider TA as a fully-fledged method in itself, whereas others regard it as a foundational tool that underpins numerous other qualitative techniques (Willig, 2013). Interpretative Phenomenological Analysis (IPA) aims to explore in-depth individuals’ perceptions and understanding of their personal and social world. Narrative Analysis examines the stories or narratives that people tell to understand how they create meaning. Discourse Analysis investigates the use of language within texts and conversations to understand social interaction. Ethnomethodology studies the way people make sense of their everyday world. Ethnography is an observational approach that involves studying human cultures and societies in their naturalistic settings. Conversation Analysis focuses on the systematic study of the talk produced in everyday situations. These methods collectively represent a spectrum of tools that researchers can use to delve into the complexities of human experiences, interactions, and social constructs, each offering a lens through which to interpret the rich tapestry of qualitative data.

Despite providing in-depth insights into potentially esoteric topics or subjects, qualitative methods face several limitations. The small sample sizes typical of this approach, although beneficial for detailed exploration, hinder the generalizability of findings to broader populations. The subjectivity inherent in qualitative methods can introduce bias, as researchers’ personal beliefs and cultural backgrounds may affect data collection, interpretation, and analysis. This subjectivity also contributes to a lack of replicability, with different researchers potentially arriving at varying conclusions from the same data. Moreover, qualitative research is time-consuming, requiring extensive effort for data gathering and analysis, such as interviews, observations, and thematic coding, which can be a drain on resources. Finally, the difficulty in standardization presents a challenge, as qualitative methods do not have uniform protocols, leading to variability in how data are collected and analysed.

Quantitative Methods

Quantitative research is a methodological approach that emphasises the measurement and statistical analysis of data (Bryman, 2012). Developed from a deductive standpoint, it prioritises the verification of hypotheses, influenced by empirical and positivist philosophies (Bryman, 2012). It advocates for the systematic and objective examination of phenomena that can be observed in order to assess and comprehend the connections between them. For this, it employs various methods and techniques for quantification, demonstrating its widespread application as a research method across diverse scholarly fields (Babbie, 2010; Given, 2008; Muijs, 2010). Quantitative methods aim to create and employ mathematical models, theories, and hypotheses related to phenomena. Measurement, a key aspect of this approach, establishes the essential link between what is observed empirically and the mathematical formulation of relationships between quantities.

Quantitative research employs various data collection methods to gather numerical data that can be analysed statistically. Surveys involve standardised questions administered through questionnaires or interviews to collect participants’ responses. Experiments manipulate variables in controlled settings to observe their effects on other variables. Observational studies systematically observe and record natural phenomena without intervention. Both experiments and observational studies may be longitudinal (conducted for the same group of participants over long periods) or cross-sectional (data collection and analysis happen at a specific point in time). Secondary data analysis examines existing data from databases, records, or previous studies. Physiological measures record biological processes (e.g., heart rate, brain activity) to obtain data related to psychological states. Lastly, computerised tracking collects electronic data using digital tools or software to monitor behaviours or responses.

Quantitative data analysis methods, primarily statistical, are used to summarise, describe, and infer conclusions from the data. Descriptive statistics include measures of central tendency (mean, median, mode) and measures of variability (range, variance, standard deviation) to summarise and describe the features of a dataset. Inferential statistics encompass parametric techniques like t-tests, ANOVA, ANCOVA, regression tests, Pearson’s correlation, and non-parametric methods such as chi-square tests, Kruskal-Wallis Test, Mann-Whitney U Test, and Spearman’s Rank Correlation. These techniques are used to infer patterns, relationships, and predictions from the data. Factor Analysis is employed to identify underlying variables, or factors, that explain the pattern of correlations within a set of observed variables. Path Analysis is a statistical technique within multiple regression that assesses causal models by examining the relationships between one dependent variable and multiple independent variables. Structural Equation Modelling (SEM) integrates factor analysis with multiple regression to investigate the structural links between measured variables and underlying constructs. Time Series Analysis comprises methods that analyse sequences of data points collected or recorded at time intervals to identify trends, cycles, or seasonal variations in the data.
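As a small illustration of the descriptive measures listed above, the sketch below computes them with Python’s standard library (one of several tools mentioned later alongside SPSS and R); the dataset is invented for illustration.

```python
import statistics as st

data = [4, 8, 6, 5, 3, 7, 8, 9, 4, 8]  # invented scores

# Measures of central tendency.
central_tendency = {
    "mean": st.mean(data),
    "median": st.median(data),
    "mode": st.mode(data),
}

# Measures of variability.
variability = {
    "range": max(data) - min(data),
    "variance": st.variance(data),  # sample variance (n - 1 denominator)
    "std_dev": st.stdev(data),      # sample standard deviation
}
```

Inferential procedures such as t-tests, ANOVA, or regression would build on these summaries, typically via dedicated statistical software.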

Despite their wide usage and applicability, quantitative methods may not be the most appropriate or effective when conducting exploratory research, when investigating subjective experiences and personal opinions, when exploring in-depth or complex topics, and when studying sensitive or controversial topics. Qualitative methods are generally more suited to such situations.

Mixed Methods

Mixed methods research refers to the integration of both qualitative and quantitative data, methods, methodologies, and paradigms within a single study or a collection of related studies. It can be considered a specific instance of multimethod research or multimethodology, which involves employing multiple data collection methods or research approaches coherently for a given research investigation or a set of interconnected studies.

According to Creswell and Plano Clark (2018), mixed methods research is characterized by five distinct features: (a) the collection and analysis of both qualitative and quantitative data; (b) the employment of rigorous methodologies; (c) the adoption of a mixed methods research design; (d) the implementation of theoretical frameworks or philosophical paradigms; and (e) the amalgamation of quantitative and qualitative data, methodologies, or findings. Mixed methods research is categorized into four classes (Johnson et al., 2007):

  1. Quantitatively Driven Approaches are primarily quantitative studies enhanced by qualitative data to provide more comprehensive insights. Emphasis is on quantitative quality, but qualitative data must be high-quality as well.

  2. Qualitatively Driven Approaches are mainly qualitative studies supplemented by quantitative data to enrich the findings. Qualitative quality is prioritised, with the necessity for high-quality quantitative data (Hesse-Biber & Johnson, 2015).

Greene and Caracelli (1997) propose the term ‘embedded design’ and Creswell, Plano Clark, et al. (2003) define it as a type of mixed methods design where one data set (qualitative or quantitative) supplements and is secondary to another data set. This definition encompasses both of the above approaches defined by Johnson et al. (2007).

  3. Interactive/Equal Status Designs give equal weight to quantitative and qualitative methods, often utilising a team of experts. They emphasise a balance of quality criteria across methods, guided by the principle of multiple validities legitimation (Johnson & Christensen, 2014; Onwuegbuzie & Johnson, 2006).

  4. Mixed Priority Designs: The main results stem from the combined analysis of qualitative and quantitative data (Creamer, 2018).

The advocacy for mixed methods research as a strategic approach in intervention and research is grounded in four key observations: Firstly, narrow perspectives can often distort reality, hence adopting multiple viewpoints or paradigms can lead to a more comprehensive and accurate understanding of the world. Secondly, the various strata of social research, such as biological, cognitive, and social, each benefit from different methodological strengths; employing a combination of these can yield a more lucid and robust explanation of social phenomena. Thirdly, the practical application of mixed methodologies is already prevalent in addressing specific issues, although this practice lacks sufficient theoretical underpinning. Lastly, the essence of multimethodology is in harmony with pragmatic principles, advocating for its suitability and effectiveness.

 

Sample Size Selection & Sampling Methods

The sample size hinges on the researcher’s objectives. One aim might be to determine the presence and/or direction of a specific effect, while another could be to quantify how large the effect is. In certain cases, both objectives may be relevant. Although the sample size is usually planned prior to data collection, when using extant data it is crucial to assess whether the research questions can be appropriately addressed with the available sample size (Kelley et al., 2023).

For the former objective, Kelley et al. (2023) recommend performing an a priori power analysis to determine a suitable sample size. This approach provides the sample size from a fixed-n perspective, in which the sample size is planned in advance based on fixed parameters, such as a predetermined number of participants. For the latter goal, the AIPE (Accuracy in Parameter Estimation) method is recommended. Although traditionally a fixed-n method, recent developments allow for sequential estimation approaches as well, which eschew a predetermined sample size; instead, data are evaluated continuously as they are collected, and sampling stops once a predefined stopping rule is satisfied.
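As a rough illustration of the fixed-n a priori calculation, the sketch below uses the common normal-approximation formula for a two-sided, two-sample mean comparison. This is a simplification: dedicated tools such as G*Power use the slightly more conservative t-based calculation, so the numbers here are approximate.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided, two-sample mean
    comparison, using the normal approximation:

        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2

    where d is the standardised effect size (Cohen's d).
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)  # round up: sample sizes must be whole participants

# A medium effect (d = 0.5) at alpha = .05 and 80% power:
n_per_group = sample_size_per_group(0.5)
```

Smaller anticipated effects demand sharply larger samples, which is why the effect size must be specified (from theory or prior literature) before data collection.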

After ascertaining the sample size, the researcher must employ a suitable sampling strategy to recruit participants for the empirical research. Several key factors must be considered when deciding upon a sampling strategy: the research goals and objectives (Babbie, 2016), population characteristics, such as demographic variables (Creswell, 2014), sampling frame availability (Khalifa, 2020), research design (Khalifa, 2020), etc.

The three major classifications of sampling methods are random, non-random, and mixed. Random sampling involves selecting participants from a population in a manner that ensures each member has an equal probability of being selected, without any specific pattern or bias. This encompasses techniques such as simple random sampling (random selection from the population), systematic sampling (selection of members of the population at regular intervals), cluster sampling (random selection from clusters within the population), and stratified random sampling (random selection from strata within the population created on the basis of shared characteristics), which may be proportionate or disproportionate. In non-random sampling, participants are selected based on criteria other than random chance, often on the basis of convenience or of specific characteristics relevant to the research study. This comprises the following sampling types: quota sampling (selection based on predetermined quotas for specific characteristics), convenience/opportunity sampling (selection based on participant availability or accessibility), purposive sampling (deliberate selection of the participants judged most informative for the research question), double sampling (drawing a sample from another sample), snowball sampling (participants refer others), and criterion-based sampling (selection based on specific criteria relevant to the research question). Mixed sampling combines different sampling methods, such as random and non-random, to enhance the robustness and representativeness of the research sample.
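A few of the sampling techniques above can be sketched with Python’s standard library; the population of participant IDs and the year-group strata below are hypothetical.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Each member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Select members at regular intervals after computing the step size."""
    step = len(population) // n
    return population[::step][:n]

def stratified_sample(population, strata_key, per_stratum, seed=0):
    """Randomly sample within strata built from a shared characteristic."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(strata_key(member), []).append(member)
    return [m for group in strata.values()
            for m in rng.sample(group, min(per_stratum, len(group)))]

# Hypothetical population: participant IDs tagged with a year group.
population = [(i, "year1" if i % 2 == 0 else "year2") for i in range(100)]

srs = simple_random_sample(population, 10)
sys_s = systematic_sample(population, 10)
strat = stratified_sample(population, strata_key=lambda m: m[1], per_stratum=5)
```

The fixed seed makes the draws reproducible, mirroring the documentation practices discussed under data collection.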

 

Data Collection

Data collection refers to systematically gathering and measuring information related to specific variables within a defined system. This process allows researchers to address relevant questions and assess outcomes based on the collected data. The various data collection methods in both qualitative and quantitative research have been outlined in the section on research design and methodology. Researchers usually employ data collection tools when gathering data: standardised psychometric tests, interview guides, observation protocols, questionnaires, biophysiological measures, neuroimaging techniques, rating scales, coding systems, checklists, and so on. The researcher may also use audio or video recording, or field notes during data collection methods such as interviews, observations, and focus group discussions.

In psychology, as in other disciplines, researchers are also concerned with data integrity, defined as ‘the representational faithfulness of the information to the condition or subject matter being represented by the information’ (Boritz, 2005, p. 4). In research, it also involves ensuring that data are accessible for validation. Data integrity matters significantly because scientific research informs critical aspects of society, including policy, healthcare, and education. Researchers employ two key approaches to safeguard data integrity and uphold scientific validity (Most et al., 2003). Quality assurance comprises actions taken before data collection to ensure data reliability and trustworthiness. Such actions involve rigorous planning, standardised protocol development, training, and error prevention during data entry. The second approach, quality control, comprises actions taken during and after data collection to monitor and maintain data quality. This involves checks, audits, validation procedures, and the prompt detection and correction of errors, such as individual staff or site performance problems, errors in individual data items, systematic errors, protocol violations, and fraud or scientific misconduct.

Data quality criteria have been developed by scholars for qualitative, quantitative, and mixed-methods research (Bryman et al., 2008; Cavaleri et al., 2019; Swanborn, 1996; Westerman, 2006; Yadav, 2022). Examples of quality criteria for qualitative research are self-reflexivity, coherence, relevant and timely topic, credibility, ethics, etc. Examples of quality criteria for quantitative research are generalisability, reliability, validity, replicability, etc. Researchers are therefore advised to keep such quality checks in mind when conducting their research. Regardless of the domain or whether data are quantitative or qualitative, precise data collection is crucial for maintaining research integrity. Choosing suitable data collection tools (whether existing, modified, or newly developed) and providing clear instructions for their correct usage helps minimise errors.

 

Data Analysis

Data analysis involves examining, cleaning, transforming, and modelling data to uncover valuable insights, draw informed conclusions, and support decision-making (“Transforming Unstructured Data into Useful Information,” 2014). Data are collected and analysed to answer questions, test hypotheses, or disprove theories (Judd & McClelland, 1989). A wide variety of data analysis techniques is used in psychological research; these have been covered in the section on research design and methodology. For quantitative data analysis, software tools such as SPSS (Statistical Package for the Social Sciences), JASP (Jeffreys’s Amazing Statistics Program), Stata, MATLAB (MATrix LABoratory), and R are commonly used. For qualitative data analysis, MAXQDA, NVivo, ATLAS.ti, QDA Miner, and Quirkos are well known. The choice of software depends on the particular research needs, the data type, and the preferences of the researcher; each tool has its strengths and may be preferred for different types of analyses or research designs.

When data are first collected, they need to be processed or arranged for analysis (Nelson, 2014). For example, this might involve organising the data into rows and columns in a tabular format (referred to as structured data), often using a spreadsheet or statistical software (Schutt & O’Neil, 2013). In qualitative analysis, this can mean transcribing interviews or categorising open-ended survey responses. Thereafter, data cleaning is conducted, which is necessary because of data-entry and storage errors (Bohannon, 2016). It involves correcting incomplete, duplicate, or erroneous data (Bohannon, 2016; Garber et al., 2010) and includes tasks such as record matching, error identification, deduplication, and column segmentation (“Data Cleaning,” 2024). Analytical techniques and threshold reviews detect problems, while the data type dictates the cleaning methods, such as outlier removal for quantitative data or spell-checking for text (Goodman, 1998; Peleg et al., 2011).
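The deduplication and outlier-removal steps mentioned above can be sketched in a few lines. This example uses the median absolute deviation (MAD) rather than the mean and standard deviation, because a single extreme entry error can inflate the standard deviation enough to mask itself; the 3.5 cutoff follows a common convention, and both it and the data are purely illustrative.

```python
import statistics

# Illustrative data-cleaning sketch: deduplicate values, then drop outliers
# using the median absolute deviation (MAD), which is robust to extreme
# entry errors. The 3.5 cutoff and the data are illustrative assumptions.

def clean(values, cutoff=3.5):
    deduped = list(dict.fromkeys(values))  # drop exact duplicates, keep order
    med = statistics.median(deduped)
    mad = statistics.median(abs(v - med) for v in deduped)  # assumes mad > 0
    # Modified z-score: 0.6745 scales the MAD to match the SD for normal data.
    return [v for v in deduped if 0.6745 * abs(v - med) / mad <= cutoff]

raw = [12, 14, 14, 13, 15, 12, 140]  # 140 looks like a data-entry error
print(clean(raw))                    # → [12, 14, 13, 15]
```

In practice a flagged value would be checked against the source record before deletion, since an “outlier” may be a genuine observation rather than an error.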

After data cleaning, researchers employ exploratory data analysis (EDA) to start deciphering the information within the data (Davis et al., 2015). This exploration may prompt further cleaning or additional data requests, initiating iterative cycles. Generating descriptive statistics such as the mean or median helps researchers comprehend the data (Murray, 2013). Additionally, data visualisations such as histograms, pie charts, and scatter plots present the data graphically, providing further insight into the underlying information (Schutt & O’Neil, 2013). In qualitative research, EDA involves looking for patterns or themes in the data, typically through coding and thematic analysis.
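A brief sketch of EDA on quantitative data: descriptive statistics plus a crude text histogram, using only the standard library. The reaction times (in milliseconds) are invented for illustration.

```python
import statistics
from collections import Counter

# Hypothetical reaction times in milliseconds (invented for illustration).
times = [420, 435, 450, 450, 465, 480, 480, 480, 495, 510]

print("mean:  ", statistics.mean(times))    # 466.5
print("median:", statistics.median(times))  # 472.5
print("stdev: ", round(statistics.stdev(times), 1))

# Crude text histogram: 30 ms bins, one '#' per observation.
bins = Counter(30 * (t // 30) for t in times)
for edge in sorted(bins):
    print(f"{edge}-{edge + 29} ms: " + "#" * bins[edge])
```

Even a rough display like this reveals the shape of the distribution and any clustering before formal analysis begins; in real projects the same step is usually done graphically in R, SPSS, or similar software.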

Finally, mathematical formulae or models are used in inferential statistics to identify relationships among variables, such as correlational or causal relationships (Ben-Ari, 2012). Broadly speaking, models can be created to predict a particular variable from other variables in the dataset. The accuracy of the model determines the extent of any remaining error, a relationship often summarised as Data = Model + Error (Evans et al., 2017; Judd & McClelland, 1989). Additionally, researchers may construct descriptive models of the data to simplify the analysis and communicate the results (Judd & McClelland, 1989).

 

Interpreting Results

The process of interpreting results in research involves analysing and deriving meaning from the data produced during a study (Abbadia, 2023). Researchers examine patterns, trends, and correlations within the data to develop reliable findings and draw meaningful conclusions. Proper interpretation is critical for understanding the relevance of the findings, ensuring the reliability and trustworthiness of the conclusions, shaping future research, and applying the findings to real-world scenarios.

The qualitative researcher is generally advised to integrate the narratives of their participants, acknowledge their own subjectivity, use participants’ quotes, remain sensitive to context, and derive conclusions that are firmly grounded in the data. For quantitative research, it has been argued that the pursuit of statistical significance and the reliance on Null Hypothesis Significance Testing (NHST) can distort the accumulated knowledge in psychology (Howitt & Cramer, 2020). NHST addresses only the likelihood of observing an effect of a certain magnitude by chance alone, assuming the null hypothesis of no effect is true. It serves to prevent false positives, which occur when random sampling anomalies lead to conclusions about differences, trends, or associations in a researcher’s data that do not actually exist in the larger population from which the sample was drawn. Therefore, beyond statistical significance, Howitt and Cramer (2020) recommend considering confidence intervals, effect sizes, and statistical power analysis, which provide additional information. Finally, they advise that ‘It is not a good strategy to cherry pick the best bits of the data in the hope that this convinces reviewers that the study should be published’ (Howitt & Cramer, 2020, p. 304).
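As a sketch of the effect-size and confidence-interval reporting recommended above, the following computes Cohen’s d and an approximate 95% confidence interval for the difference between two group means. The scores are invented, and the 1.96 multiplier is the large-sample normal approximation; small samples such as these would properly use critical values from the t distribution instead.

```python
import statistics

# Beyond the p-value: Cohen's d and an approximate 95% CI for a mean
# difference. Scores are invented; 1.96 is the large-sample normal
# approximation (small samples would use the t distribution).

group_a = [23, 25, 28, 30, 26, 27, 24, 29]
group_b = [20, 22, 21, 24, 23, 19, 22, 21]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation for Cohen's d.
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
cohens_d = (mean_a - mean_b) / pooled_sd

diff = mean_a - mean_b
se_diff = (var_a / n_a + var_b / n_b) ** 0.5
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

print(f"Cohen's d = {cohens_d:.2f}")
print(f"mean difference = {diff:.2f}, 95% CI ≈ ({ci[0]:.2f}, {ci[1]:.2f})")
```

Unlike a p-value, the effect size conveys how large the difference is, and the interval conveys how precisely it has been estimated, which is exactly the additional information Howitt and Cramer argue for.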

 

Ethical Considerations

Ethical considerations in psychological research are paramount, ensuring the protection of participants by prioritizing their safety and rights, minimizing harm, and maintaining confidentiality (Grace et al., 2020). Informed consent is a cornerstone, requiring that participants are fully aware of the study’s purpose, procedures, risks, and benefits, thereby upholding autonomy and transparency (Cherry, 2023). Researchers must balance the scientific goals with participant welfare, making ethical decisions that reflect the pursuit of knowledge alongside ethical responsibilities. This includes avoiding harm and exploitation, particularly of vulnerable populations, and being mindful of power dynamics and cultural differences. Finally, transparency and reporting are essential, necessitating the clear communication of methods, results, and ethical considerations, as well as the disclosure of any conflicts of interest.

Ethical guidelines for research involving non-human subjects are essential to ensure the humane and ethical treatment of animals used in scientific studies. These guidelines, informed by principles emphasizing animal welfare and rights, are enforced by regulatory bodies and ethical committees. Non-maleficence (Tumilty et al., 2018), beneficence (Tumilty et al., 2018), and voluntary participation (Van Patter & Blattner, 2020) also guide research with non-human participants. The 3Rs principle (Replacement, Reduction, and Refinement) forms the foundation of ethical animal research. Researchers strive to: replace animal use with alternative methods whenever possible; reduce the number of animals needed to meet research objectives; and refine procedures to minimize pain, suffering, and distress to animals.

While animals cannot provide informed consent, researchers must obtain proper authorization and justify animal use. Humane treatment is paramount, ensuring proper housing, care, and management to maintain animal well-being. Procedures should minimize discomfort, distress, and pain. Veterinary care is also crucial. Competent individuals, often in consultation with a veterinarian, make decisions regarding animal welfare. Ethical review by an Institutional Animal Care and Use Committee (IACUC) evaluates proposed research, considering both ethical and scientific aspects.

Researchers and personnel must receive adequate training in animal care and handling. Understanding species-specific needs and ethical implications is essential. Providing environmental enrichment allows animals to exhibit natural behaviours, enhancing welfare and data quality. When necessary, euthanasia must be rapid and painless, following approved methods. Transparency is vital: researchers must report findings, including any adverse effects on animals and acknowledge the power differentials present in research with non-human animals (Malone et al., 2010). Publications should detail animal welfare considerations and the ethical review process. Finally, researchers must comply with legal requirements related to non-human participants. The Guide for the Care and Use of Laboratory Animals serves as a framework, emphasizing scientific, ethical, and legal aspects of animal research.

 

Conclusion

Conducting an empirical research study in psychology involves several key steps that are essential for advancing our understanding of human behaviour and mental processes. These steps include formulating research questions, developing testable hypotheses, collecting and analysing data, and drawing conclusions based on empirical evidence. It is crucial to follow the scientific method, which emphasizes systematic observation and experimentation to ensure the reliability and validity of research findings.

Moreover, it must be recognised that research is often an iterative process, with overlaps, revisions, and additions being common in practice. Replication of studies is also vital to confirm the robustness of findings and ensure the generalizability of results. By replicating studies, researchers can strengthen the evidence base in psychology and build a more comprehensive understanding of psychological phenomena. Unfortunately, journals do not always encourage replication studies, which leaves some ambiguity about progress in research areas (Howitt & Cramer, 2020).

Empirical research plays a fundamental role in advancing psychological science by providing a solid foundation for evidence-based practice and theory development. Through rigorous testing of hypotheses and validation of theories, empirical research drives knowledge advancement, uncovers new insights, and informs decision-making in various fields such as policy, education, healthcare, and societal welfare. It fosters critical thinking among psychologists, encouraging them to question assumptions and seek empirical support for their claims.

In essence, the value of empirical research in psychology cannot be overstated. It is through empirical research that we gain valuable insights into human behaviour, contribute to the growth of psychological knowledge, and ultimately make meaningful contributions to society. By upholding the principles of empirical research and embracing its iterative nature, we can continue to push the boundaries of psychological science and make a positive impact on individuals and communities worldwide.

 

References

  • Abbadia, J. (2023, July 28). Understanding the interpretation of results in research. Mind the Graph Blog.

  • Al-Ababneh, M. M. (2020). “Linking Ontology, Epistemology and Research Methodology.” Social Phenomena, 8(1), 1-15.

  • American Psychological Association. (2018, April 19). Methodology - Updated on 04/19/2018. Retrieved May 06, 2024, from https://dictionary.apa.org/methodology

  • American Psychological Association. (2018, April 19). Research design - Updated on 04/19/2018. Retrieved May 06, 2024, from  https://dictionary.apa.org/research-design

  • American Psychological Association. (2021). Empirical method. In APA Dictionary of Psychology. Retrieved from https://dictionary.apa.org/empirical-method

  • American Psychological Association. (2021). Hypothesis. In APA Dictionary of Psychology. Retrieved from https://dictionary.apa.org/hypothesis

  • American Psychological Association. (2021). Scientific method. In APA Dictionary of Psychology. Retrieved from https://dictionary.apa.org/scientific-method

  • Babbie, E. R. (2016). The Practice of Social Research. Cengage Learning.

  • Babbie, Earl R. (2010). The practice of social research (12th ed.). Belmont, Calif: Wadsworth Cengage.

  • Ben-Ari, M. (2012). First-Order Logic: Formulas, Models, Tableaux. In Mathematical Logic for Computer Science (pp. 131–154). London: Springer London.

  • Berger, R. (2015). Now I see it, now I don’t: Researcher’s position and reflexivity in qualitative research. Qualitative Research, 15(2), 219-234.

  • Bohannon, J. (2016, February 24). Many surveys, about one in five, may contain fraudulent data. Science.

  • Boritz, J. E. (2005). IS practitioners' views on core concepts of information integrity. International Journal of Accounting Information Systems, 6(4), 260-279.

  • Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International journal of social research methodology, 11(4), 261-276.

  • Bryman, Alan (2012). Social research methods (4th ed.). Oxford: Oxford University Press.

  • Campbell, J. P., Daft, R. L., & Hulin, C. L. (1982). What to study: Generating and developing research questions. Sage.

  • Cavaleri, R., Bhole, S., & Arora, A. (2019). Critical appraisal of quantitative research. Handbook of Research Methods in Health Social Sciences, 1027–1049.

  • Cherry, K. (2023). Informed Consent in Psychology Research. Verywell Mind.

  • Creamer, Elizabeth G. (2018). An introduction to fully integrated mixed methods research. Los Angeles, CA: SAGE.

  • Creswell, J. W. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications.

  • Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.

  • Creswell, J. W., Plano Clark, V. L., Gutmann, M., & Hanson, W. (2003). Advanced mixed methods research designs. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 209–240). Thousand Oaks, CA: Sage.

  • Daft, R. L. (1983). Learning the craft of organizational research. Academy of Management Review, 8(4), 539–546. https://doi.org/10.2307/258255

  • Daft, R. L. (1985). Why I recommend that your manuscript be rejected, and what You can do about it. In P. Frost & L. L. Cummings (Eds.), Publishing in the organizational sciences (pp. 164–182). Richard D. Irwin.

  • Davis, S., Pettengill, J. B., Luo, Y., Payne, J., Shpuntoff, A., Rand, H., & Strain, E. (2015, August 26). CFSAN SNP Pipeline: An automated method for constructing SNP matrices from next-generation sequence data. PeerJ Computer Science, 1, e20. doi:10.7717/peerj-cs.20/supp-1.

  • Evans, M. V., Dallas, T. A., Han, B. A., Murdock, C. C., Drake, J. M., & Brady, O. (Ed.). (2017, February 28). Figure 2. Variable importance by permutation, averaged over 25 models. eLife, 6, e22053. doi:10.7554/elife.22053.004.

  • Feibleman, J. K. (1969). An introduction to the philosophy of Charles S. Peirce. MIT Press.

  • Furlong, J., & Marsh, D. (2010). A Skin Not a Sweater: Ontology and Epistemology in Political Science. In D. Marsh & G. Stoker (Eds.), Theory and Methods in Political Science (pp. 184-211). Palgrave Macmillan

  • Garber, J. S., Gross, M., & Slonim, A. D. (2010). Avoiding common nursing errors. Wolters Kluwer Health/Lippincott Williams & Wilkins.

  • Given, Lisa M. (2008). The SAGE Encyclopedia of Qualitative Research Methods. Los Angeles: SAGE Publications. 

  • Goodman, L. E. (1998). Judaism, human rights, and human values. Oxford University Press.

  • Grace, B., Wainwright, T., Solomons, W., Camden, J., & Ellis-Caird, H. (2020). How do clinical psychologists make ethical decisions? A systematic review of empirical research. Clinical Ethics, 15(4), 213–224.

  • Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New directions for evaluation, 74, 5-17.

  • Gravetter, F. J., & Forzano, L. B. (2009). Research methods for the behavioral sciences (3rd ed.). Wadsworth Cengage Learning.

  • Grix, J. (2004). The Foundations of Research. Palgrave Macmillan.

  • Haig, B. D. (2013). The philosophy of quantitative methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods in Psychology, Vol. 1. Oxford University Press.

  • Hesse-Biber, S. N., & Johnson, R. B. (2015). The Oxford handbook of multimethod and mixed methods research inquiry. Oxford.

  • Howitt, D., & Cramer, D. (2020). Research methods in psychology. Pearson.

  • Huff, A. S. (2009). Designing research for publication. Sage Publications.

  • Johnson, B., & Christensen, L. (2008). Educational research: Quantitative, qualitative, and mixed approaches (3rd edition). Sage

  • Johnson, R. B., & Christensen, L. B. (2014). Educational Research: Quantitative, Qualitative, and Mixed Approaches (5th ed.). Los Angeles, CA: SAGE.

  • Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133. doi:10.1177/1558689806298224.

  • Judd, C., & McClelland, G. (1989). Data analysis. Harcourt Brace Jovanovich.

  • Kelley, K., Anderson, S. F., & Maxwell, S. E. (2023). Sample-size planning.

  • Khalifa, M. (2020). What are sampling methods and how do you choose the best one? Students 4 Best Evidence.

  • Kincheloe, J., & Berry, K. (2004). Rigour and Complexity in Educational Research: Conceptualizing the Bricolage. Open University Press.

  • Leong, F. T. L., & Muccio, D. J. (2006). Finding a research topic. In F. T. L. Leong & J. T. Austin (Eds.), A guide for graduate students and research assistants (2nd ed., pp. 23–40). Sage.

  • Leong, F. T., Schmitt, N., & Lyons, B. J. (2012). Developing testable and important research questions.

  • Malone, N. M., Fuentes, A., & White, F. J. (2010). Ethics commentary: Subjects of knowledge and control in field primatology. American Journal of Primatology, 72, 779-784.

  • Maxwell, J. A. (2012). A realist approach for qualitative research. SAGE Publications.

  • Most, M. M., Craddick, S., Crawford, S., Redican, S., Rhodes, D., Rukenbrod, F., & Laws, R. (2003). Dietary quality assurance processes of the DASH-Sodium controlled diet study. Journal of the American Dietetic Association, 103(10), 1339-1346.

  • Muijs, D. (2010). Doing quantitative research in education with SPSS (2nd ed.). Los Angeles, CA.

  • Murray, D. G. (2013). Tableau your data! : Fast and easy visual analysis with Tableau Software. J. Wiley & Sons.

  • Nelson, S. L. (2014). Excel data analysis for dummies. Wiley.

  • Novak, J. D. (2004). The theory underlying concept maps and how to construct them. https://cmap.ihmc.us/docs/theory-of-concept-maps

  • Onwuegbuzie, A. J., & Johnson, R. B. (2006). The “validity” issue in mixed methods research. Research in the Schools, 13(1), 48–63.

  • Pathak, V., Jena, B., & Kalra, S. (2013). Qualitative research. Perspectives in clinical research, 4(3), 192.

  • Peleg, R., Avdalimov, A., & Freud, T. (2011, March 23). Providing cell phone numbers and email addresses to patients: The physician’s perspective. BMC Research Notes, 4(1), 76.

  • Petty, R. E. (2006) The research script: One researcher’s view. In F. T. L. Leong & J. T. Austin (Eds.), Psychology research handbook: A guide for graduate students and research assistants (2nd ed., pp. 465–480). Sage.

  • Ramirez, D., & Foster, M. J. (2023). Searching with a purpose: How to use literature searching to support your research.

  • Santaella Braga, L. (2019). C. S. Peirce’s Abduction, Induction, and Deduction. In: Peters, M. (eds) Encyclopedia of Educational Philosophy and Theory. Springer, Singapore. https://doi.org/10.1007/978-981-287-532-7_575-1

  • Schutt, R., & O’Neil, C. (2013). Doing Data Science. O’Reilly Media.

  • Snape, D., & Spencer, L. (2003). The Foundations of Qualitative Research. In J. Ritchie & J. Lewis (Eds.), Qualitative Research Practice: A Guide for Social Science Students and Researchers (pp. 1-23). SAGE Publications.

  • Strauss A, Corbin J. (1990). Basics of qualitative research. California: Sage.

  • Swanborn, P.G. (1996). A common base for quality control criteria in quantitative and qualitative research. Qual Quant 30, 19–35.

  • Toomela, A. (2010). Quantitative Methods in Psychology: Inevitable and Useless. Frontiers in Psychology, 1, 29.

  • Tumilty, E., Smith, C. M., Walker, P., & Treharne, G. (2018). Ethics unleashed: Developing responsive ethical practice and review for the inclusion of non-Human Animal participants in qualitative research. In R. Iphofen & M. Tolich (Eds.), The SAGE handbook of qualitative research ethics (pp. 396-410). Thousand Oaks, CA: Sage.

  • Van Patter, L. E., & Blattner, C. (2020). Advancing ethical principles for non-invasive, respectful research with nonhuman animal participants. Society & Animals, 28(2), 171-190.

  • Wernher von Braun. (1957). Work, Society, and Culture. The New York Times.

  • Westerman, M. A. (2006). What counts as “good” quantitative research and what can we say about when to use quantitative and/or qualitative methods?. New ideas in psychology, 24(3), 263-274.

  • Willig, C. (2012). Perspectives on the epistemological bases for qualitative research. In H. Cooper et al. (Eds.), APA handbook of research methods in psychology, Vol. 1. Foundations, planning, measures, and psychometrics (pp. 5–21). American Psychological Association.

  • Willig, C. (2013). Introducing qualitative research in psychology (3rd Edition). Maidenhead: Open University Press.

  • Yadav, D. (2022). Criteria for good qualitative research: A comprehensive review. The Asia-Pacific Education Researcher, 31(6), 679-689.

This article was written by Ishwar Kukreja, who is a part of the Global Research Internship Program (GRIP)
