Proving Public Diplomacy Programs Work



Last year, the Advisory Commission on Public Diplomacy, a bipartisan committee established in 1948 to assess and appraise the United States’ PD activities, released a report, “Data-Driven Public Diplomacy: Progress Toward Measuring the Impact of Public Diplomacy and International Broadcasting Activities.”

Like many similar reports over the years, the ACPD study is generally optimistic about the success of the State Department’s public diplomacy programs. It further assumes that recent advances in data collection and analytics will help us better demonstrate their success by proving their impact.

At the same time, the report takes a hard look at the current state of public diplomacy evaluation, making it clear that “progress toward” measuring the impact of public diplomacy is not the same thing as actually being able to measure it.

The uncomfortable truth that this report and others like it highlight is that after more than 70 years of institutionalized public diplomacy activities, we still can’t empirically verify the impact of most of our programs.

A consequence of this failing was highlighted by the State Department in its 2013 inspection of the Bureau of International Information Programs. Ironically, as public diplomacy programs have become more strategically focused, they’ve also become harder to manage and evaluate.

James Rider is a mid-level public diplomacy-coned Foreign Service officer who is currently the political-economic section chief in Libreville. He previously served in Caracas and Tel Aviv. In 2013, he won AFSA’s W. Averell Harriman Award, recognizing constructive dissent by an entry-level Foreign Service officer.

The Office of the Inspector General’s findings raised serious questions about the lack of an overall public diplomacy strategy at the department: “The absence of a departmentwide PD strategy tying resources to priorities directly affects IIP’s … Fundamental questions remain” [emphasis added].


What is the proper balance between engaging young people and marginalized groups versus elites and opinion leaders? Which programs and delivery mechanisms work best with which audiences? What proportion of PD resources should support policy goals, and what proportion should go to providing the context of American society and values? How much should PD products be tailored for regions and individual countries, and how much should be directed to a global audience?

These questions are relevant for everyone involved in public diplomacy work, not just IIP. I believe that the main reason we are still left with so many “unresolved fundamental questions” about the nature of our work is our continued inability to measure the impact of our programs. It is impossible to accurately allocate resources to priorities when you don’t actually know what works.

But why haven’t we been able to measure our impact? A review of recent studies suggests some answers.

We Do Not Value Evaluation

One reason has to do with the long-standing deficiencies of public diplomacy measurement and evaluation regimens. An astonishing fact highlighted in the advisory commission’s report is that in 2013 the Bureau of Educational and Cultural Affairs (ECA, the PD bureau that manages our best-known educational and exchange programs) allocated only 0.25 percent of its budget for program evaluation. The percentage allocated by other PD bureaus and offices was not much higher.

For comparison, the report notes that the industry average for evaluation spending is 5 percent. The University of Southern California’s “Resource Guide to Public Diplomacy Evaluation” says