Where I Am Right Now
Based on my self-assessment across five AEA domains, I placed myself at 2.3 out of 6.0, which reflects an early developing level of competence. That score is an average, but the distribution across domains is more revealing than the number itself: interpersonal competence was my highest at 4.0, methodology my lowest at 1.42, and the remaining domains clustered around 2.0.
This pattern shows that I have strong relational instincts shaped by my professional experience, but more limited development in the technical and structural dimensions of evaluation. It also reveals something more significant about how I have been approaching my role: I tend to operate as a responder. I am effective at interpreting situations and supporting people in real time, but I have not structured how problems are defined, measured, or evaluated from the outset. This distinction between responding and designing has become the central insight guiding my reflection.
Strengths I Bring to Evaluations
One of my strongest areas is interpersonal competence, particularly in building relationships and navigating conversations across perspectives. In my role working with families in independent study, I often support parents trying to understand why their student may be struggling. This is not simply a matter of sharing information. It requires building trust, validating concerns, and helping families make sense of complex situations in ways that lead to action.
This aligns with AEA Competency 5.1, which focuses on fostering relationships that support evaluation use and engagement. According to Stevahn et al. (2005), interpersonal competence is foundational because evaluation takes place within human systems. My ability to connect with stakeholders and create shared understanding is a clear strength, and it helps explain why I have been more effective in responsive roles than in designing structured evaluation processes.
A second area that emerged is my tendency to interpret experiences within context. I do not naturally view data or outcomes in isolation. When reviewing student performance, I consider factors such as access to curriculum, consistency of engagement, parent capacity, and emotional well-being. Performance reflects a system, not just an individual.
This aligns with AEA competencies related to interpreting findings within context (2.13) and responding to the uniqueness of evaluation settings (3.1). However, my self-assessment score in the context domain was a 2.0, which highlights an important tension: while contextual thinking is instinctive for me, I have not yet applied it systematically as an evaluator. The AEA framework expects deeper analysis of political, cultural, and organizational contexts, including surfacing assumptions and attending to systems-level dynamics. My instinct is strong, but my ability to apply it intentionally in formal evaluation practice is still developing, making this an emerging rather than a developed strength.
Where I Need To Grow
My most significant growth area is methodology. My score of 1.42 reflects a gap in my ability to design credible evaluations. I am used to responding to data rather than designing how it is generated. For example, when a new intervention is introduced at my school, I can observe whether it appears effective and gather informal feedback, but I do not begin with defined evaluation questions, establish criteria for success, or plan how evidence will be collected or analyzed.
This connects directly to AEA competencies such as identifying evaluation purposes (2.1), determining evaluation questions (2.2), and designing credible evaluations (2.3). Russ-Eft, Preskill, and Jordan (2024) emphasize that evaluation should be integrated from the beginning rather than added after implementation. This is where the gap between responder and designer is most visible in my practice.
A second growth area is planning and managing evaluation processes. I do not yet think systematically about timelines, intended use, or how findings will be communicated and applied; my work often develops in response to immediate needs rather than through a structured plan. This aligns with AEA Competency 4.4, planning for evaluation use and influence. Evaluation that produces findings but does not inform decisions represents a breakdown in the process. Methodology addresses what to do, while planning and management address how to execute the work and ensure its use. Both are necessary for more intentional evaluation practice.
What Challenged My Assumptions
One of the most surprising aspects of this self-assessment was how little evaluation is purely technical. I initially viewed evaluation as a neutral process focused on collecting and analyzing data. However, the competencies related to stakeholder engagement, cultural responsiveness, and promoting use challenged that assumption.
This became more visible through my work on the LCAP Advisory Committee. In that space, decisions are shaped by whose perspectives are included, how needs are interpreted, and how priorities are set. Evaluation is embedded in these dynamics, not separate from them. This experience helped me see that evaluators are not just measuring outcomes; they are shaping how those outcomes are defined, understood, and used. My initial model positioned evaluation as technical and neutral, while the competencies reframed it as relational, interpretive, and influenced by values and context.
How I’m Starting To Shift My Practice
Based on this reflection, there are several actions I plan to take to build my competence. First, I will practice designing small, structured evaluations within my current role. When new supports or interventions are introduced, I will begin by defining evaluation questions, identifying needed evidence, and determining how data will be collected before implementation.
Second, I will build my methodological knowledge through applied practice within the work I already do. I am not in a position to design or lead formal evaluations, but I can experiment with small-scale tools like surveys or structured questions and pay attention to how each tool shapes the information I get. In areas like family communication or intervention support, I can distinguish between anecdotal feedback and intentionally collected evidence.
Third, I will develop a reusable evaluation planning template that includes purpose, questions, data sources, timeline, and intended use. This will include an equity-focused step that asks whose perspectives are represented and whose are missing. While I do not lead LCAP work, I can use this framework to strengthen the input I bring by identifying which stakeholders are present in conversations and where gaps exist.
One immediate next step I will take in Module 2 is to practice defining what I am trying to understand before defaulting to tools or available data. In my role, I often respond by gathering feedback or reviewing data, but I do not always start with a clear question. I want to shift that by identifying the purpose first, then choosing how to gather information that matches it. Evidence of growth will be catching myself when I reach for available data without asking whether it actually answers the question. Even doing this inconsistently would signal a shift toward more intentional use of data.

