Educational Resource

Red Flags in Custody Evaluations

Last Updated: February 2026

What attorneys and families should know about the methodological, testing, and procedural problems that undermine the reliability of child custody evaluations, and what can be done when an evaluation falls short.

For Educational Purposes: This guide is intended to help attorneys and families recognize quality concerns in custody evaluations. It does not constitute legal advice or psychological consultation. Questions about a specific evaluation should be directed to qualified legal and psychological professionals.

Why Evaluation Quality Matters

A child custody evaluation carries significant weight in family court proceedings. Judges rely on evaluators' recommendations when making determinations about parenting time, decision-making authority, and children's wellbeing. When an evaluation is methodologically sound, it provides the court with a valuable, evidence-based perspective. When it is not, an unreliable evaluation can cause serious harm: to the children, to the parties, and to the integrity of the proceeding.

Not all custody evaluations are created equal. Evaluators vary widely in their training, methodology, thoroughness, and adherence to professional standards. Some evaluations are rigorous, thorough, and well-reasoned. Others contain significant gaps, procedural shortcuts, or conclusions that the evaluator's own data simply does not support.

Knowing what to look for in the methodology, the testing approach, the report itself, and the procedures used is essential for attorneys who want to challenge a flawed evaluation effectively and for families who believe the process failed them.

Category 1

Methodological Red Flags

The foundation of a reliable custody evaluation is a sound methodology: how data was gathered, from whom, and over what period of time. Problems at this level compromise everything built on top of them.

Inadequate Collateral Contacts

A thorough evaluation draws on multiple independent sources: teachers, therapists, physicians, extended family, and others who have direct knowledge of the children and each parent. When collateral contacts are minimal, skewed toward one side, or consist primarily of self-selected individuals provided by one party, the evaluator's data is incomplete and potentially one-sided.

Over-Reliance on a Single Data Source

No single piece of information, whether a parent interview, a psychological test result, or a child's statement, should drive the conclusions of an evaluation. Credible methodology involves triangulating data across multiple sources and resolving discrepancies through further inquiry. When conclusions rest on a single source, the foundation is inherently fragile.

Failure to Review Records

Relevant records, including medical, school, mental health, and child protective services records, as well as files from prior court proceedings, often contain objective information that either supports or contradicts what parents report. An evaluator who does not review records relevant to the case is working with an unnecessarily restricted information base, and conclusions drawn without that context are correspondingly less reliable.

Rushed or Abbreviated Evaluation

A thorough custody evaluation takes time. When evaluators compress the process, conducting only one or two interviews per parent, spending minimal time with children, or completing the evaluation over a very short period, they cannot reasonably gather the breadth of information needed to support reliable conclusions about complex family dynamics.

Failure to Address Raised Concerns

When a party raises specific concerns during the evaluation, such as allegations of domestic violence, substance use, mental health issues, or parenting deficiencies, a rigorous evaluator investigates those concerns through collateral contacts, records review, and targeted inquiry. An evaluation that mentions concerns but does not investigate or meaningfully address them falls short of what the task requires.

No Explanation of Methodology

A well-prepared report explains the methodology the evaluator used: what data sources were consulted, how information was weighted, and how competing accounts were reconciled. When a report simply announces conclusions without explaining the process behind them, there is no basis for the court, or opposing counsel, to assess whether those conclusions are well-founded.

Category 2

Psychological Testing Red Flags

Psychological testing is a specialized skill. Errors in test selection, administration, or interpretation can produce meaningless or misleading results, and those results can still find their way into a report as if they were authoritative.

Inappropriate Test Selection

Psychological tests are developed and validated for particular purposes and populations. Using a test outside its validated purpose, such as applying clinical instruments to make forensic recommendations, or using instruments without appropriate normative data for the population being assessed, is a methodological error that undermines the value of the results.

Failure to Use Validity Scales

Many well-designed psychological instruments include validity scales that detect response distortion: whether a test-taker is minimizing problems, exaggerating symptoms, or responding in a random or inconsistent manner. An evaluator who does not use instruments with validity indicators, or who uses them but does not address what they reveal, cannot assess whether the test results are meaningful.

Over-Reliance on a Single Instrument

No single psychological test provides a complete or definitive picture of a person's functioning. A thorough psychological battery draws on multiple instruments that assess different domains (personality, cognition, psychopathology, and validity) and interprets them in combination. Heavy reliance on one instrument, while ignoring others that might complicate the narrative, is a warning sign.

Misinterpretation of Test Results

Scoring a psychological test correctly is only part of the task. Interpretation requires understanding what elevated or depressed scores actually mean, what they do not mean, and how to contextualize them given the individual's history and the circumstances of the evaluation. Errors in interpretation, treating a single elevated scale as diagnostic, for example, can lead directly to flawed conclusions.

Ignoring Relevant Test Findings

When test results suggest personality disorder features, significant psychopathology, or response distortion, those findings should be addressed in the report and incorporated into the evaluator's reasoning. An evaluator who administers tests, obtains significant results, and then makes no mention of them, or dismisses them without explanation, raises serious questions about whether the findings were inconvenient for a predetermined conclusion.

Testing Without Contextual Integration

Test results do not exist in isolation. A competent evaluator integrates test data with interview observations, collateral information, and records review, examining convergence and resolving discrepancies. When a report presents test scores as stand-alone findings without integrating them into the broader clinical picture, the interpretation is incomplete.

Category 3

Bias Indicators

Evaluator bias, whether conscious or not, can shape an evaluation from the first interview to the final recommendation. Recognizing where bias may have entered the process is essential for mounting an effective challenge.

Recommendations Not Supported by the Evaluator's Own Data

One of the clearest indicators of a problematic evaluation is a recommendation that cannot be traced to the data the evaluator actually gathered. If an evaluator recommends sole custody for one parent but the report documents no significant parenting deficiencies in the other, or recommends supervised visitation without documenting specific safety concerns, the reasoning gap points to either poor methodology or something else driving the conclusion.

Dismissing Domestic Violence or Abuse Allegations

Allegations of domestic violence or child abuse must be investigated, not simply noted and set aside. An evaluator who dismisses such allegations without explaining the investigative steps taken, including review of police reports, protective order records, medical records, and relevant collateral contacts, has not performed a thorough evaluation. Dismissal without investigation is particularly concerning when documented evidence exists.

Failure to Address Personality Disorder Dynamics

High-conflict custody cases frequently involve personality disorder features in one or both parties. These dynamics affect how individuals present during the evaluation, how they characterize the other parent, and how they interact with their children. An evaluator who does not assess for personality disorder features, or who obtains testing results suggesting such features and ignores them, may have missed the most important psychological variable in the case.

Confirmation Bias

Confirmation bias occurs when an evaluator forms an early impression and then gathers or weighs information in ways that confirm rather than test that impression. Signs include disproportionate interview time with one parent, more extensive collateral contact on one side, selective citation of information that supports one party, and failure to seek out or acknowledge information that is inconsistent with the evaluator's emerging conclusion.

Differential Characterization of the Parties

When a report consistently describes one parent in warm, positive terms and the other in neutral or critical terms, without the underlying data clearly supporting that difference, the language itself suggests that the evaluator's lens may not have been balanced from the start. Evaluators who have formed strong positive or negative impressions of a party may allow those impressions to color their entire analysis.

Accepting One Party's Narrative Without Verification

Both parties in a custody dispute have strong incentives to present information in the light most favorable to themselves. A competent evaluator treats all self-reported information as requiring independent verification before being incorporated into conclusions. When a report reflects one party's account as fact, without noting contradictory accounts or explaining how inconsistencies were resolved, the evaluator has not maintained appropriate skepticism.

Category 4

Report Quality Issues

The written report is the primary vehicle through which an evaluator's findings and reasoning are communicated to the court. Problems in the report itself can reveal, or create, significant deficiencies in the evaluation.

Conclusions Not Supported by the Evaluator's Own Findings

When recommendations in the conclusions section of a report cannot be traced to specific findings documented elsewhere in the same report, there is a logical gap the evaluator has not bridged. This is one of the most significant quality failures in a custody evaluation. The document itself shows that the recommendation is not grounded in the data.

Missing Sections or Topics

A thorough evaluation report addresses all relevant domains: parenting history and functioning, children's developmental needs, psychological test results, collateral contact summaries, records review findings, risk factors, and the reasoning supporting each recommendation. When expected sections are absent, particularly where serious concerns were raised, the omission is itself a red flag.

Vague or Boilerplate Language

Generic language that could appear in any custody report, such as observations that both parents love their children, that the children are bonded to both parents, or that co-parenting communication should improve, suggests a formulaic approach when it appears without specific factual grounding, rather than genuine engagement with the particular family and its dynamics.

Lack of Specificity in Findings

Findings should be specific and grounded in identifiable observations, records, or statements. A report that characterizes a parent as "concerning" or "somewhat problematic" without identifying specific behaviors, documented incidents, or test indicators that support that characterization does not give the court a meaningful basis for evaluating the conclusion.

Failure to Address Alternative Explanations

A credible evaluation acknowledges competing interpretations of the available data and explains why the evaluator found one interpretation more persuasive than another. When a report proceeds to its conclusions without acknowledging that the evidence could support more than one reading, the evaluator has not demonstrated the kind of reasoning that distinguishes rigorous analysis from advocacy.

Inconsistencies Within the Report

Internal inconsistencies, findings in one section that are contradicted by findings in another, or conclusions that conflict with the data presented, suggest either careless drafting or a report assembled to justify a pre-formed conclusion rather than to document a careful process. Either way, the inconsistencies undermine the reliability of the document as a whole.

Category 5

Procedural Problems

How the evaluation was conducted matters as much as what was concluded. Procedural irregularities signal not only that the process was flawed, but that the data produced by that process may be unreliable.

Unequal Time with Each Parent

A procedurally sound evaluation provides roughly comparable interview time and contact with each parent. When one parent receives substantially more interview time, more follow-up contact, or more thorough questioning, the evaluator has gathered a richer picture of one party than the other, a structural imbalance that undermines the neutrality of the process.

Inadequate Child Interviews

Children are the subject of custody evaluations, and their perspectives, obtained through age-appropriate, non-leading inquiry, are important data. Brief or superficial child interviews, interviews conducted in the presence of a parent, or interviews that fail to explore the child's own experience and relationships do not yield the kind of information a complete evaluation requires.

Failure to Observe Parent-Child Interactions

Direct observation of each parent with the children provides data that interviews and testing cannot replicate: how a parent engages with the child, manages the child's behavior, responds to the child's emotional states, and supports or undermines the child's relationship with the other parent. When an evaluator forgoes direct observation or observes interactions asymmetrically, a valuable window into actual parenting behavior is lost.

Inadequate Documentation of Process

A well-documented evaluation should generate records, including interview notes, contact logs, lists of records reviewed, and correspondence, that allow reconstruction of the evaluator's process. When process documentation is sparse or absent, it is impossible to verify what information was gathered, from whom, and how discrepancies were handled. Poor documentation is both a quality indicator and a practical obstacle to meaningful cross-examination.

Ex Parte Contact with One Party

Evaluators are expected to maintain neutrality throughout the process. Informal or substantive communication with one party outside the structured evaluation process, or permitting one party's attorney to provide extensive background materials without the other party's knowledge, compromises the evaluator's neutrality and may have affected the conclusions in ways that cannot be fully traced.

Failure to Clarify or Follow Up on Inconsistencies

When accounts from different sources contradict each other, which is common in high-conflict cases, the evaluator's job is to pursue the discrepancy, not to simply record both versions and move on. Follow-up questions, additional records requests, or targeted collateral contacts are the tools of a diligent evaluator. An evaluation that documents contradictions but makes no effort to resolve them is incomplete.

A Single Red Flag May Not Invalidate an Evaluation

The significance of any given red flag depends on its nature, severity, and the degree to which it affected the conclusions. A forensic psychology consultant can help identify which deficiencies are most consequential and how to present those deficiencies effectively in litigation.

Taking Action

What to Do If You Spot Red Flags

Identifying problems in a custody evaluation is the first step. Translating those problems into an effective litigation strategy requires specific, deliberate action.

For Attorneys: Retain a Forensic Psychology Consultant

A forensic psychology consultant can analyze the evaluation in detail, examining methodology, testing practices, report quality, and procedural conduct, and provide a documented critique identifying the specific deficiencies that are most legally significant. That analysis can then inform:

  • Cross-examination questions designed to surface methodological gaps
  • Grounds for a Daubert or Frye challenge to the evaluator's methodology
  • A framework for presenting the critique to the court in a comprehensible way
  • Identification of the most damaging inconsistencies within the evaluation itself

Request a Second Opinion or Rebuttal Expert

In many jurisdictions, a party can request appointment of a second evaluator or retain their own expert to conduct an independent evaluation or critique the existing one. A rebuttal expert:

  • Provides an independent assessment of the original evaluator's methodology and conclusions
  • Documents specific deficiencies in a format suitable for court presentation
  • Can offer an alternative framework for understanding the psychological dynamics in the case
  • Is available for deposition and, where separately retained, for trial testimony

For Families: Work Through Your Attorney

If a custody evaluation appears flawed, the most effective path is to work through legal counsel rather than attempting to address the issues directly. Your attorney can:

  • Obtain the complete file and all process documentation from the evaluator
  • Retain a forensic psychology consultant to review the materials
  • File motions to compel production of documentation where necessary
  • Present the critique through appropriate expert channels rather than through direct advocacy

Document Your Concerns Carefully

If you believe an evaluation was flawed, preserve the materials that support that belief:

  • Keep copies of all communications with the evaluator
  • Note and record any procedural irregularities when they occur, not after the fact
  • Retain all records you provided to the evaluator and track what was not reviewed
  • Document any collateral contacts you proposed that the evaluator declined to make
  • Preserve evidence that was available but not incorporated into the evaluation

How Dr. Tolbert Can Help

Evaluation Critique and Consultation Services

For Attorneys

Dr. Tolbert provides behind-the-scenes forensic psychology consultation on custody evaluation cases, including:

  • Detailed written critique of an existing evaluation's methodology, testing, and reasoning
  • Development of cross-examination questions targeting specific methodological deficiencies
  • Real-time consultation during expert depositions and trial testimony
  • Daubert and Frye challenge preparation
  • Expert witness identification and vetting for rebuttal testimony

All litigation consulting work is performed through an attorney engagement and is protected under attorney-client privilege.

For Families

Families who believe a custody evaluation missed critical dynamics, including domestic violence, abuse allegations, or personality disorder features, can consult with Dr. Tolbert to:

  • Understand whether specific concerns in the evaluation are legally and psychologically significant
  • Identify which issues are most important to raise with legal counsel
  • Receive an educational overview of the evaluation process and what a sound methodology looks like
  • Understand what a rebuttal expert can and cannot do in their jurisdiction

Dr. Tolbert does not provide legal advice. Any legal strategy questions should be directed to the family's attorney.

Rate: engagement terms discussed during case review  |  Licensed in FL  |  Qualified in FL, MD, NV, CA, AK  |  Consultation available nationwide

Request an Evaluation Critique or Consultation

Contact Dr. Tolbert to discuss a custody evaluation and whether a forensic psychology consultation can help strengthen the case.

Inquire About Availability  |  Call 561-429-2140

Frequently Asked Questions

What are the most common red flags in a child custody evaluation?

Common red flags include inadequate collateral contacts, failure to review relevant records, over-reliance on a single data source, rushed timelines, inappropriate test selection, failure to use validity scales, one-sided conclusions unsupported by the evaluator's own data, unequal interview time with each parent, and failure to observe parent-child interactions. Any one of these can undermine the reliability of an evaluation's conclusions.

Can a flawed custody evaluation be challenged in court?

Yes. A flawed custody evaluation can be challenged through cross-examination, by retaining a rebuttal expert who critiques the methodology and conclusions, or through a Daubert or Frye motion challenging the scientific basis of the evaluator's methods. The strength of the challenge depends on identifying specific, documentable deficiencies in the evaluation process and the reasoning supporting the evaluator's conclusions.

How can an attorney identify bias in a custody evaluation?

Bias indicators in a custody evaluation include recommendations that are not adequately supported by the evaluator's own data, dismissal of documented domestic violence or abuse allegations without explanation, failure to acknowledge or address personality disorder dynamics identified through testing, significant imbalance in interview time or collateral contacts between parties, and language that characterizes one parent in markedly more sympathetic terms than the other without factual grounding.

What does a custody evaluation critique or second opinion involve?

A custody evaluation critique involves a systematic review of the evaluator's methodology, the completeness and quality of the data gathered, and the reasoning connecting that data to the evaluator's conclusions and recommendations. The reviewing expert examines psychological test selection, administration, and interpretation; assesses the scope of collateral contacts; identifies procedural irregularities; and documents where conclusions are not supported by the evaluator's own findings.

Disclaimer: Dr. Tolbert is a licensed psychologist, not an attorney. She does not provide legal advice or legal representation. Litigation consulting does not create a psychotherapist-patient relationship, and no psychotherapist-patient privilege applies. When retained through an attorney, communications are protected under attorney-client privilege and work product doctrine.