Esprit de l’escalier: how blogs can help government agencies and public servants do their jobs better

I participated in an enjoyable discussion on open government on Late Night Live last night. If one has been thinking about things for a long time and wants to get certain ideas across, it can be pretty challenging doing this effectively – which is to say without misunderstanding – on a panel program, though I can’t complain. Phillip Adams was moving the discussion along, as is his job, and I wasn’t often the victim of being cut off.

Even so, the one thing that concerned me when I’d concluded was that I wasn’t able to directly discuss the idea that one of the panelists – Andrew Podger – seemed to suggest. I should preface what I’m saying by noting that I’ve met Andrew on a number of occasions, and, like many people in Canberra, I have a very high regard for him. Andrew seemed to think that the idea of public servants blogging was really a bit alarming, perhaps flip. He was concerned that there was no room for public servants to be blogging about what they were briefing ministers about. I would generally agree. But this really illustrates my argument – articulated briefly on the show – that when we debate this issue we don’t really deliberate on where and how social media like blogging could add value. Rather we focus on the extremes, and on what can go wrong, and the default rapidly becomes a silence that is in no way compelled by the public service values we’re trying to defend.

There is much more that public agencies do, and much more that public servants do, other than offer confidential and potentially politically contested advice to ministers. What I was at pains to point out was that the default right now is silence, and that forgoes a lot of exciting opportunities.

I generally agree that there needs to be some government ‘privacy’, if you like, around what public servants are advising governments. In a world of confrontation between Opposition and Government, all played out in the context of a media hungry for the only story they really want to write about – conflict – abandoning that confidentiality would compromise the advice. On the one hand it would tie the hands of politicians and make it harder for them to come to their own decision on what to do if it did not accord with their official advice. On the other, and in response, a lot of pressure would be put on public servants to provide the ‘right’ advice – the advice the ministers want to hear.

But there are so many other ways in which blogging and other uses of Web 2.0 could be useful. Especially in a small country, there’s a limited pool of people with real expertise about any number of things – say a technical matter like the management of tropical rainforest. Say provisions of the Tax Act.  Now it is quite possible to imagine discussion about such things that is politically partisan.  And so it should be avoided as contrary to the aspirations of the public service.

But it is also possible to imagine professional discussion of such things – discussion focused on information sharing that is not politically partisan.

Most obviously one can do this when one is running an inquiry, and we did it in the Government 2.0 Taskforce. We avoided political partisanship, and we did so easily and, I would have thought, with minimal risk. And it was highly successful in involving people, having them feel listened to, in spreading the word of our inquiry and in drawing in experts from around the world. So I can’t see why the bodies charged with conducting independent public inquiries – such as the Australian Law Reform Commission (probably our leading policy inquiry body in its attempts to explore online engagement), the Productivity Commission, the ACCC and any number of others – are not using blogs to help inform their (independent) deliberations, or are not doing so much.

Now in fact any large organisation is running inquiries all the time. They’re trying to sort out this or that, considering changing the way they do something, doing policy research into one thing or another. This is quintessential knowledge work. And they are also doing things that it can be beneficial to let people know about.  Sometimes senior public servants will reasonably take the view that saying that one is doing a whole lot of new work on something might be politically contentious in itself. But there are any number of relatively mundane reviews of things where this is not the case.

In some areas one would need to be more constrained – for instance in talking about taxes – and one’s level of circumspection might have to rise when political partisanship was particularly strong around a particular matter – say the efficacy of fiscal policy right now. But there are still plenty of things one could talk about.  One could for instance have technical discussions about the tax statistics and how they could be improved, the thin capitalisation rules, practices in other countries or any number of discussions.

In my deliberations on government bodies, I’m frequently struck by the necessary limitations on our knowledge and how useful a bit of blogging would be – and how rarely it would raise any risk of being perceived as politically partisan. To take one example, a subject that often comes up is ‘how do we measure what we’re doing?’. Now how would blogging about such a thing jeopardise public service values? It would simply make it clear that the public service was curious and keen to get the input of those who might have some really good ideas, expertise or experience.

So there are lots of ways in which public servants could blog. And yes, there are ways they shouldn’t blog.

16 thoughts on “Esprit de l’escalier: how blogs can help government agencies and public servants do their jobs better”

  1. It’s not just Andrew Podger who finds the idea of blogging alarming. People confuse the platform with the style and content of the first blog they think of. As a result, they have visions of an assistant secretary going Mr Kurtz on the internet.

    No sane departmental secretary would want one of her underlings running an Andrew Bolt style blog that attracts devoted nutters and gets quoted in the papers (“Climate change ‘a good thing’ says environment department official”). But that’s not what you’re suggesting.

    It’s a shame this discussion was framed as a freedom of information issue. It raises the spectre of public servants disclosing protected and in-confidence information. But one of the major benefits of blogging is that public servants can use it to gather information from outsiders.

    You rightly say “there’s a limited pool of people with real expertise about any number of things”. Blogging could be a great way of finding these people and working out whether engaging with them is useful. And even better, the process takes place in the open. Nobody can claim that their rivals have unfair behind-closed-doors access.

    The orthodox approach to locating and contacting experts is to go up and down organisational stove pipes. A public servant drafts a letter or email (taking care to place a copy on file), has it cleared and endorsed by their manager and sent to the relevant expert’s superior. By the time the process is complete, the issue may well be dead. Or, the person is too busy with something else to be any use. The process is a deterrent to engagement.

    Inviting discussion on a policy issue doesn’t require bureaucrats to disclose information that hasn’t been disclosed before. The host can answer questions from documents that have already been cleared, moderate discussion and ask questions.

  2. I listened to that discussion, Nicholas, and you came across well.

    I was surprised, though, at how little time was spent on the many intriguing and potentially serendipitous aspects of Government 2.0, the sort of things you outlined in your editorials late last year. They generated a sense of excitement and opportunity which this program didn’t quite manage. As Don said, FOI issues hijacked pretty much the whole thing. Most unfortunate.

    Maybe you can talk Phillip into having another go.

  3. Yes, we headed for the dilemmas rather than the opportunities. Of course the dilemmas need to be understood and dealt with, each as best we can, but you don’t end up with any taste for engaging with the medium, or for getting the most out of it, if you are unfamiliar with its potential.

  4. Actually, I wonder if taxes is not such a good example for just the reason you give – the circle of people with expertise on any given subject is so small that everyone knows each other, and the debates you are talking about happen by email, formal submissions, telephone and conference (e.g. the National Tax Liaison Group). There are obviously some vested agendas but overall very little partisanship. I don’t think anyone very sane complains that they have unrecognised special knowledge of the tax law. In a way what goes on with taxes is like a permanent inquiry.

    If Treasury blogged on, for example, the proposed changes to the controlled foreign companies rules, this would, in my perhaps arrogant opinion, be unlikely to add much to the conversation.

    The more meta blogging you referred to at the end, however (‘how do we measure what we’re doing?’), could be a different story.

    But I think, and I think this is what you said before, the best place to start is the low-hanging fruit – just releasing information. Surely, starting with that would help make sure that when the blogging started it came from a sounder cultural foundation.

  5. Nick, I was not meaning to be alarmist in cautioning about the use of blogs. My concern relates to the need for guidance to public servants, beyond that issued by the Public Service Commissioner (who also recommends agency heads issue such guidance to their staff).

    Even somewhat independent agencies such as the Productivity Commission are likely to place some constraints while promoting engagement between their professional staff and external experts, given the benefits of a disciplined approach (supported by their legislation) to releasing draft reports and discussion papers. For those agencies more intimately involved in the deliberations of ministers, extra care will be needed to ensure the trust and confidence of ministers is not endangered. The effect of lost ministerial confidence might be quite counterproductive, with ministers restricting their deliberation to ‘trusted’ political advisers and giving excessive weight to interest groups rather than rational analysis in the public interest.

    I mentioned during the program that I have always supported the publication of papers by agencies and their staff, and their participation in expert forums. Even senior officers should be giving public speeches from time to time to explain the background to government policies and programs, without crossing the line of their non-partisanship. Technology change means such exchanges may often these days be via internet blogs of various sorts. I also strongly support the idea of placing data banks on line to allow outsiders to explore them without offence to privacy etc (as the ABS does with its various data cubes).

    I hope this clarifies my position.

  6. Andrew

    While not wanting to interpose myself in a dialogue between you and Nicholas, surely the proposition that public servants must confine themselves to using blog forums only for “the publication of papers by agencies and their staff, and their participation in expert forums” is unnecessarily narrow and constipated.

    Surely at least suitably qualified and experienced public servants can be trained and trusted to recognise situations where they may make statements with the potential of being seen as partisan or extending beyond existing government policy, and to adopt strategies and follow procedures which ensure that they avoid doing so.

    Some research in which I was engaged last year suggested that the Commonwealth might avoid any such danger by having consultation blogs moderated by non-public servant consultants, partly in order to provide a sort of “cut-out” or circuit breaker where the consultant could easily be “disowned” if necessary by government if he/she said anything too embarrassing or unauthorised. However I think that’s an unduly nervous, defensive position. It shouldn’t be too difficult for any moderately experienced, intelligent middle-ranking public servant to recognise and avoid situations where one might say such things, and the potential benefits of engaging in a real and interactive way with interested members of the public are likely to outweigh any slight risk.

    Finally, I think that confining blog interaction to designated “expert” fora would drastically restrict their potential utility as fertile ground for creative thinking, which is far more likely to emerge from a reasonably free-flowing (but carefully moderated) conversation between interested community members with diverse expertise and interests, with occasional careful interventions by appropriate public servants to keep the discussion “on track” and productive. The public servant/moderator’s role will generally be to seek to solicit and clarify input from members of the public, not to express the public servant’s own views. It shouldn’t generally be a difficult distinction to keep in mind. One obvious procedural rule to avoid the sorts of dangers you fear could be that, where a public servant moderator perceives that the discussion is being derailed by misconceptions about existing government policy or practice, he/she is obliged to refer the matter upwards for approval before responding and correcting the misconception.

  7. Thanks for your response Andrew,

    We may agree on this – I’m not sure – but it’s perhaps worth spelling some things out further.

    Our report argued that there is a wide range of activity that should be thought of neither as strictly private, nor as ‘official’ communication from the agency, but as professional discussion. Before elaborating on that, let me give you an example of a use for ‘official’ blogging that is very powerful but not really in evidence.

    Sitting on the board of a government authority I frequently observe questions coming to the board that could do with wider discussion. Most recently the body I was on was considering how to measure its own performance. Wouldn’t that – and many other issues of a professional and not party political nature – be a good thing to open up on a blog? The board is well disposed to this idea, and no doubt we’ll learn as we go, but this is a different kind of engagement from the kind you’ve discussed above.

    It’s not just fitting up an existing routine (like a PC inquiry) with some additional tools – as worthwhile as that may be. It is taking advantage of the technologies that now exist to do what the new APSC guidelines call for with their reference to “Web 2.0 provid[ing] public servants with unprecedented opportunities to open up government decision making and implementation to contributions from the community.”

    Of course, other things being equal, any public decision maker would be keen to involve the community more rather than less. But I think what the example shows is not Web 2.0’s capacity to enhance democracy. At least in this instance the issue is fairly technocratic rather than democratic. But just as a proper public inquiry can – at considerable cost and delay – improve the basis of decision making, Web 2.0 now allows the holding of (what one senior Cth public servant described as) mini-inquiries on many matters of importance as one goes along. We should take up the opportunity it gives us to make more informed decisions and simply to proceed on many matters in a more informed way.

    Turning to the issue of professional discussion – as opposed to ‘official’ and ‘private’ discussion, it is clear to those of us who practice in this medium that participation on blogs is extraordinarily useful for ‘knowledge workers’. It helps test ideas, and make connections both between ideas and between people interested in the same subject and able to offer something to each other. There will undoubtedly continue to be constraints on senior public servants as to how they can and should express themselves in such forums. But the current default rule is silence. Public servants read blogs – and plenty of them come to this site – but except for a very few pseudonymously identified participants, nary a comment is made. It’s a pity don’t you think?

    A year or so ago an academic who keeps a blog was seconded to the public service and it was made clear to them that they were not to continue blogging for the duration. The alternative would have been to have said to them that they had to bring the blog within the PS code of conduct while they were in the PS. The particular blog in question had lots of stuff on it which was simply informational – abstracts of important newly released articles from the field. Other stuff was administrative, and some of it was opinion. Some of the opinion was professional opinion and so it might have been appropriate to tone some of it down, but much of it was useful discussion on professional points.

  8. This illuminating discussion would be invaluable if permanently linked to Gov2.

    Thank you Nicholas Gruen (Chair Gov 2 Taskforce) for clarifying things and making another opportunity for discussion on this important topic and to all others who commented. As a newcomer I am attempting to catch up on reading and understanding, so I am pleased to find another forum where I can familiarize myself with the background to Gov2 and the innovative and serious thought that led to the submission of the Final Report of the Taskforce in December.

    On 11 April I posted a comment on the Gov2 blog with a plea to Senators Lindsay Tanner and Joseph Ludwig to approve the recommendations of the Gov2 Taskforce as tabled on 22 December 2009.

    There are so many reasons to endorse moves that will embrace government reform through innovative initiatives such as those recommended in the Government 2.0 reform proposal – with the aim of “making our public service the world’s best.”
    There is much room for collaboration with partners within and outside government, to look for stitch-in-time options where things may be going wrong, and to identify good ideas that can be collaboratively and transparently considered in the formation of a

    “closer and more collaborative relationship with their government. Australia has an opportunity to resume its leadership in seizing these opportunities and capturing the resulting social and economic benefits.”(Key Point 2 of Final Report).

    The Report pointed to the importance of making changes to leadership, policy and governance in order to

    “shift public sector culture and practice to make government information more accessible and usable; make government more consultative, participatory and transparent; build a culture of online innovation within Government; and promote collaboration across agencies”

    Darren Whitelaw, General Manager of Corporate Communication at Victoria’s Department of Justice, in his invited Guest Post on 31 December expressed his personal view of the costs of setting up and delivering Gov2, but also of the risks of not implementing this existing project with so much potential to develop a more collaborative, inclusive and democratic government. I support that view and all of the recommendations of the Taskforce.

    I have cited the visionary views of David Adams, Peter Shergold, John Faulkner, Roger Wilkins, Eddie Molloy and others on best practice governance and self-regulation.

    In my related blogs on Gov2 on 9 April on “The Faceless Bureaucrat” (comment #12923) and later on “If I Had a Blank Piece of Paper….” (comment # 12945) I discussed Eddie Molloy’s article in the Irish Times (9 April 2010) entitled “Seven things the public service needs to do.”

    Molloy discussed transparent accountability; independent external scrutiny; effective sanctions – accountability with consequences; abandoning the belief in gifted generalists; establishing the managerial role throughout the civil service; restoring the capacity and powers of the civil service to act as a bulwark against reckless political decisions; establishing a full cabinet ministry responsible for public service reform. Molloy believes that reform from within is extremely rare. He refers to the impacts of embedded cultures.
    Overcoming cultural and political barriers to effective reform is, I believe, amongst the most challenging of tasks, not just for the Gov2 Project but for the nitty gritty of public policy management.

    Both Molloy (IrishTimes.com, 9 April 2010) and Peter Kell have expressed views on “unworkable” or “half-baked” self-regulation in relation to corporations (see Peter Kell, “Keeping the Bastards Honest – Forty Years On”, National Consumer Congress 2005; “Consumers, Risks and Regulation”, NCC; “Holding Corporations to Account”, NCC 2007).

    Though Kell was perhaps referring to corporations providing goods and services in a commercial context, I believe the observations are as valid for providers of public services, however they may be structured – as incorporated bodies limited by guarantee but without share capital. Examples include regulators like the AER; rule makers like the Australian Energy Market Operator (AEMO) (previously NEMMCO) and the Australian Energy Market Commission (AEMC); and policy-makers like the Ministerial Council on Energy (MCE).

    The former Cabinet Secretary and Special Minister of State, Senator John Faulkner, said in his Address on the Transparency and Accountability Agenda, 30 October 2008:

    “Ladies and gentlemen:

    United States Senator Alan K Simpson once said: “If you have integrity nothing else matters. If you don’t have integrity nothing else matters.”
    Transparency ensures appropriate visibility to government actions and the political process. I’ve personally taken the view after many years in both politics and Parliament that there’s no better way to achieve integrity and accountability within government and government transactions than by promoting transparency and openness.

    Australians must be able to know how their government works and have confidence that authority is exercised appropriately.”

    It is those sentiments and observations that I highlighted in my plea to Senator Lindsay Tanner and Senator Joe Ludwig whilst they consider the implications of the Taskforce’s recommendations in their Final Report.

    In his brilliant submission to the Gov2 Taskforce Issues Paper, Andrae Muys, Senior Software Engineer in Metadata and Informatics, discusses the Web 2.0 view of even traditional documents as dynamic “living records” with a transparent revision history, and “the need for a re-evaluation of the legislative, regulatory, and cultural norms relating to the participation of public servants in the public sphere. Specifically a need to alleviate the unreasonable level of jeopardy they face through participation with Web 2.0.”
    In particular Muys suggests that: “We need to change our perception of the drafting process from a process of drafting and subsequent publication to a process of curation and moderation.”

    Muys recognizes that fear of public criticism may hamper transparency and other Gov2 goals, and recommends that “public servants need to be provided room to fail” if they are not to be forced into paralysis or subversion of the access policy: “To operate successfully Gov 2.0 must accept the existence of errors and implement tight corrective feedback loops seeking a trajectory of increasing accuracy.”

    There are indeed many sensitivities to be overcome and addressing the cultural barriers may represent the most challenging of all tasks. With due care and recognition of the pitfalls these barriers can and should be overcome.
    I for one would very much like to see the Gov 2 Taskforce recommendations implemented.

    Regards

    Madeleine Kingston

  9. Hi Nicholas (Nicholas Gruen, Chair Gov 2 Taskforce)

    Returning to your posting of 5 March 2010, “Esprit de l’escalier: how blogs can help government agencies and public servants do their jobs better”, and your own response to Andrew Podger’s posting of 15 March, I would like in particular to pick up on your comments about evaluation of self-performance.

    A good while ago I collated a number of evaluation principles gleaned from the postings and writings of experts on the topic, some forming part of the online, internationally-based American Evaluation Association (AEA) discussion group known as EVALUTALK.

    With full citation and attribution to the authors I incorporated many suggestions from that splendid resource, to which I subscribe, avidly reading the postings made. Other sources included Michael Patton’s work. The suggestions were incorporated in a number of my submissions to public arenas, including the MCE (published on http://www.ret.gov.au under my name), the Victorian Essential Services Commission (Part 2A to their 2008 Review of Regulatory Instruments) and the Productivity Commission.

    The context was formal evaluative assessment undertaken by professionals, but the list below nevertheless raises some general evaluative questions that it would be prudent for any organization to ask of itself, or to submit to external evaluation on.

    I hope you will not mind my including them here, and possibly also posting them formally on the Gov2 website in a suitable place, since I am not sure whether Club Troppo postings are linked to Gov2.

    ***
    SOME BURNING EVALUATION PRINCIPLES CONVERTING THEORY INTO PRACTICE:

    How many of these principles were adopted in the various evaluative processes undertaken by those guiding or undertaking major or minor policy reform in various State, Commonwealth or advisory arenas?
    They may assist with general evaluative and record-keeping best practice principles – for all policy, regulatory and other entities working in the public policy arena.

    Recommendations: General evaluative principles

    1. What was the evaluand (Funnell and Lenne 1989) at each of several levels – mega, macro and micro – since different stakeholders will have different concerns at each of these levels (Owen 1999:27)?

    2. In choosing design and methods, were any cautions observed against replacing indifference about effectiveness with a dogmatic and narrow view of evidence (Ovretveit 1998)?

    3. What external threats were identified and considered before the data gathering exercise was undertaken?

    4. What comparisons were used?

    5. What were the boundaries and objectives?

    6. Was an evaluability assessment undertaken to determine more precisely the objectives of the intervention, the different possible ways in which the item could be evaluated, and the costs and benefits of different evaluation designs?
    (Wholey JK (1977) “Evaluability assessment” in L Rutman (ed.) Evaluation Research Methods: A Basic Guide, Beverly Hills, CA: Sage, and
    Wholey JK (1983) Evaluation and Effective Public Management, Boston: Little, Brown; c/f Ovretveit, Evaluating Health Interventions, Open University Press/McGraw-Hill (reprinted 2005), Ch 2, p 41)

    7. What were the implied or explicit criteria used to judge the value of the intervention?
    8. Which evaluation design was employed, since a decision on this issue would impact on the data-gathering measures?

    9. Was the evaluative design in this case case-control, formative, summative, a combination of process (formative) and summative; cost-utility or audit? Will assessment of the data gathered be contracted out to an informed researcher or research team with recent professional development updates and grasp of the extraordinary complexities in the evaluative process?
    (Patton, M. Q. (2002) “Qualitative Research & Evaluation Method” Sage Publications).

    In addition there are some excellent, more current resources focused on the not-for-profit sector, which I can gather together at some stage and reproduce on this site or Gov2 or both, if appropriate.

    This is such a topical matter that I thought it could do with highlighting again.

    I applaud your view that perhaps Gov2 discussions could be undertaken more broadly since there are so many useful considerations that may not fall neatly into its current parameters. What do you think?

    Kind regards

    Madeleine (Kingston)

  10. Hi again

    Sorry I short-changed everyone. My original list, gleaned by me from several sources, was much longer than I thought, so here’s the rest of it; it should be read in conjunction with the suggestions above.

    The material had formed part of several submissions made to public consultative arenas, including the AEMC, ESC (Vic) and MCE arenas published on the RET website http://www.ret.gov.au

    Note I have retained the numbering sequence from the above post which ended with Evaluative question 9 (Patton)

    10. How was the needs assessment conceptualized?

    11. Was the program design clarifiable?

    12. How was the formative evaluation undertaken?

    13. What are or were the Program Implementation process evaluation parameters?

    14. What measures will be in place for evaluating the “settled program” (or policy change proposed)?

    15. How were short-term impacts conceptualized and identified for the proposed changes?

    16. What definitive outcomes are sought and how will these outcomes be determined by follow-up?

    17. Was/will there be time to activate the evaluation’s theory of action by conceptualizing the causal linkages? Whilst not ideal, if no theory of action was formulated, perhaps it is not too late to partially form a theory of action plan.

    {Patton, M. Q. “The Program’s Theory of Action” in Utilization-Focused Evaluation, Sage, Thousand Oaks, 1997, pp 215-238}

    18. Was there room or time in the data-gathering exercise to probe deeper into the answers provided by the people whose lives will be affected by any decision the Government may make to deregulate within the energy industry?

    19. The skilled questioner knows how to enter another’s experience.

    {From Halcom’s Epistemological Parables c/f ibid Qualitative Research and Evaluation Methods, Ch 7 Qualitative Interviewing}

    20. As Eyler (1979) said, “What are figures worth if they do no good to men’s bodies or souls?”

    {c/f Ovretveit (1997) “Evaluating Health Interventions”. Open University Press. McGraw-Hill (reprinted 2005), Ch 1}

    21. What was done to assess the intended impacts of the studies undertaken?

    22. Before the data-gathering exercise was undertaken, and considering the time constraints, were these factors considered: feasibility; predictive value; simulations; front-end; evaluability assessment?

    23. What processes will be undertaken to ensure added-value components to the evaluation?

    24. How will the agencies/entities utilize case study examples to augment the existing, relatively generic study – which addressed standard demographics over a large sample without sub-segmentation of more vulnerable groups (such as residential tenants or regional consumers) – with more in-depth evaluation?

    25. How carefully will the agencies/entities, in their parallel Review/Inquiry, review program documentation in tandem, especially where there is overlap; examine complaints and incident databases; and form a linkage unit for common issues?

    26. To what extent have the following evaluative processes been undertaken by both bodies, and by all Commonwealth and State bodies, including the MCE and COAG teams, policy advisers, policy-makers and regulators? (See Centre for Health Program Evaluation, Melbourne University.)

    27. Do all of the government, quasi-government, regulatory and other bodies have a plan by which program analysis can be undertaken formally, and by which success criteria can be measured as the desired features of the outcomes represented in the outcomes hierarchy, defining more precisely the nature of the outcomes sought and the link between the stated outcome and the performance measures for that outcome in terms of both quantity and quality?

    {See Funnell, S. (1997) “Program Logic: An Adaptable Tool for Designing and Evaluating Programs” in Evaluation News and Comment, V6(1), pp 5-17}

    a) How will the success of the policy changes ultimately effected be monitored and re-evaluated, and how often? Specifically, will there be a second phase of evaluation as one of accountability to managers, administrators, politicians and the people of Australia?

    b) What will the rule change policy be, and will it be transparent and accountable not only internally but to the general public as stakeholders?

    c) Generic protections, such as those afforded by current and pending trade practices and fair trading provisions, are insufficient and not as accessible as is often purported.

    d) Within an industry that represents an essential service, and where large numbers of vulnerable and disadvantaged consumers (not just on financial grounds) are underrepresented, how will the Government ensure that the rights of specific stakeholder groups are not further compromised?

    e) How accessible will Rule Changing be?
    (my question to the AEMC – which remains topical given a new proposed Rule Change on embedded generation arrangements and outsourcing that also has relevance to a current AER Determination impacting on outsourcing and cost determination – discussed elsewhere)

    f) In choosing design and methods, what will be done about replacing indifference about effectiveness with a dogmatic and narrow view of evidence? {Ovretveit, 1998}

    What will the rule change policy be, and will it be transparent and accountable not only internally but to the general public as stakeholders?

    a) How accessible will Rule Changing be?

    b) Perhaps the agencies and entities would consider seeking specialist evaluation input, with further evaluation of data, when making major regulatory reform decisions.

    c) Does Government have a plan by which program analysis can be undertaken formally, and by which success criteria can be measured as the desired features of the outcomes represented in the outcomes hierarchy, defining more precisely the nature of the outcomes sought and the link between the stated outcome and the performance measures for that outcome in terms of both quantity and quality?

    {Funnell, S. (1997) “Program Logic: An Adaptable Tool for Designing and Evaluating Programs” in Evaluation News and Comment, V6(1), pp 5-17}

    Evaluation is a sophisticated and scientific professional challenge. It is not just a trade, though compromises often make it so. Professional evaluators are humble people. They make no pretenses. Regardless of reputation or status, they are never too proud to ask for collaborative input and peer opinion and suggestion. Evaluation is a continuing process and does not start and end with data gathering. They recognize the challenges of best-practice data gathering and evaluation and do not pretend to have all the answers.
    For instance, check out the University of Alabama’s EVALUTALK facility, the American Evaluation Association Discussion Group.

    This group is the cutting edge of evaluative practice. The rest of the world respects the results this group achieves.

    One such evaluator could be Bob Williams, a highly respected NZ evaluator with an international reputation and particular expertise in public policy evaluation. He is a frequent visitor to Australia, and is a fairly well known figure in Australasian evaluation through his evaluations, his work within the Australasian Evaluation Society (AES) (which merged with Evaluation News and Comment under Bob Williams’ supervision) and his contributions to the two Internet discussion groups Evalutalk and Govteval. He has vast experience of governmental evaluations.

    On the online Evaluator’s Forum, EVALUTALK, Bob Williams responded that evaluators should not be seen as mere technicians doing what they are asked to do, but should be seen as craftspeople with a pride in their work and the outcomes of their findings long after the consultative process is over.

    Williams’ specialties are evaluation, strategy development, facilitating large-group processes and systemic organizational change projects. He has his own website under his name.

    He reviews books for the journal Management Learning and writes for the Australasian Evaluation Society’s journal. He wrote the entries on “systems”, “systems thinking”, “quality” and “planning” for the Encyclopaedia of Evaluation {Sage 2008} and co-wrote, with Patricia Rogers, a contribution to the Handbook of Evaluation {Sage 2006}.

    There is a great deal of valuable consultative evaluation advice out there for the asking. Lay policymakers are not normally trained in this area.

    Bob Williams has commented as follows on EVALUTALK:

    “The Ministry of Education here in New Zealand has been doing something very interesting for the past four or five years. The policymakers, along with teachers, university researchers and others, have been developing a series of “best evidence syntheses”.

    The concept of “best evidence” is fairly comprehensive with a set of agreed criteria for what constitutes “best” and “evidence”. As each synthesis is developed it is opened up for discussion with practitioners and academics – and placed on the Ministry of Education’s website. I was involved in some of the early discussions (as a facilitator rather than evaluator) and was impressed by both the method and the content of the syntheses.

    What I found most impressive was that the policymakers were brave enough to include evidence that challenged some of the assumptions that have dominated education policymaking in the past few decades (e.g. the extent to which socio-economic status affects student performance).”

    The 2006 edition of the World Education Yearbook describes the BES Programme as “the most comprehensive approach to evidence” and goes on to say: “What is distinctive about the New Zealand approach is its willingness to consider all forms of research evidence regardless of methodological paradigms and ideological rectitude, and its concern in finding… effective, appropriate and locally powerful examples of ‘what works’.”

    Bob Williams suggests that before data gathering is undertaken the underlying assumptions must be made explicit, followed by identification of the environment and the environmental factors that will affect the way in which the intervention and its underlying assumptions will interact and thus behave.

    A recent dialogue between evaluators on that Discussion List produced a useful list of criteria that would cover the processes that should ideally be undertaken.

    Though the inputs came from a number of Discussion List members, I cite below how Bob Williams, a respected New Zealand evaluator with an international reputation, summarized the inputs from various evaluators participating on the Discussion List.

    http://www.eval.org

    Position the evaluation – that is, locate the evaluation effectively in its context, in the broader systems.

    Bob Williams, Discussion List Member Evalutalk

    Wow that was a long post – just as well I split it up.

    Hope this is taken in the spirit intended.

    More will follow when I get a chance, assuming it is helpful material for this Gov2 Project 13 and for other readers.

    Regards

    Madeleine (Kingston)

  11. Hi Nicholas (Nicholas Gruen Chair Gov2 and others)

    The article in The Australian of 13 April 2010 concerning the state of health policy prompted me to complete this hat-trick on evaluation, though other references are also pertinent when I get to them. I hope this is not too long to post, but it really belongs with the other two postings to complete the picture. Hope it is of some use in a very practical sense. I have already posted on the Gov2 site but am not sure if the two are linked. In any case I can’t just leave Part 3 hanging without its companions. It would be too untidy.

    I need to acknowledge that the evaluative theory models belong to others as meticulously cited. My role was to put the material together in some sort of sequence with the citations. Each author cited is responsible for the ideas.

    My last blog tome against Project 13 finished off with advice from Bob Williams, a New Zealand evaluator.

    I continue the thread with further discussion of that advice, quoting directly from Williams’ advice on positioning, interrupting with one or two observational interjections from me, and then returning to the discussion between Bob Williams and Stanley Capella.

    From Bob Williams, NZ evaluator:

    “Position the evaluation – that is, locate the evaluation effectively in its context, in the broader systems.

    1. Clarify the purpose and possibilities, etc (design phase – why do it)

    2. Plan the evaluation (design phase) (what do we want to know)

    3. Data Gathering (how will we find out what we want to know)

    4. Making meaning from the data (e.g. analysis; synthesis; interpretation) (how can we get people to be interested in the evaluation processes/results?)

    5. Using the results (shaping practice) (what would we like to see happen as a result of the evaluation, and what methods promote that?)”

    MK Comment:

    This is impossible to achieve without a comprehensive, informed SWOT analysis that goes well beyond background reading of other components of the internal energy market – a highly specialized exercise, especially in an immature market. Prior to undertaking the survey mentioned to ascertain market awareness, what steps were taken to mount a strengths and weaknesses (SWOT) analysis?

    If undertaken, where can the results be located? This type of exercise is normally undertaken prior to the gathering of data so that the survey data is meaningful and robust enough to address a range of relevant factors, and not simply narrowly focused on data-gathering that may yield compromised results if the goals and parameters that could have been initially identified in a SWOT analysis were not clearly identified and addressed in the study design.

    Stanley Capella, on the University of Alabama Online Evaluation Discussion Group EVALUTALK, has questioned whether evaluators should push for program decisions based on evaluation, or whether this is an advocate’s role.

    Bob Williams responded that evaluators should not be seen as mere technicians doing what they are asked to do, but should be seen as craftspeople with a pride in their work and the outcomes of their findings.

    As suggested by Ovretveit (1997) in Evaluating Health Interventions, Open University Press/McGraw-Hill (reprinted 2005), Ch 6:

    “Design is always a balancing of trade-offs.” “Inexperienced evaluators are sometimes too quick to decide design before working through purposes, questions and perspectives.”

    “Ideas which are fundamental to many types of evaluation are the operational measure of outcome, the hypothesis about what produces the outcome, an open mind about all the (factors) that might affect the outcome and the idea of control of the intervention and variable factors other than the intervention.”

    “Randomized experimental designs are possible for only a portion of the settings in which social scientists make measurements and seek interpretable comparisons. There is not a staggering number of opportunities for its use.”

    {Webb et al. 1966, c/f Ovretveit, Evaluating Health Interventions, “Evaluation Purpose Theory and Perspectives”, Ch 2, p 31}

    “Politicians often do not examine in detail the cost and consequences of proposed new policies, or of current policies.” {Ibid, Ovretveit, Ch 2, p 27}

    In discussing better informed political decisions Ovretveit noted, for example, the lack of prospective evaluation, or even of small-scale testing, of internal market reforms in Sweden, Finland and the UK. He did not imply that all new policies should be evaluated, or that the results of an evaluation should be the only basis on which politicians decide whether to start, expand or discontinue health policies – just that politicians could sometimes save public money or put it to better use if they made more use of evaluation and of the “evaluation attitude”. {Ibid, Ch 2, p 27}

    In Ch 3 (p 73) Ovretveit describes six evaluation design types:

    Descriptive (type 1);

    Audit (type 2);

    Outcome (type 3);

    Comparative (type 4);

    Randomized controlled experimental (type 5); and

    Intervention to a service (type 6).

    Each of these six broad designs can be, and has been, successfully used in a variety of interventions targeted at examining policies and organizational interventions, depending on which of the four evaluation perspectives has been selected: quasi-experimental; economic; developmental or managerial.

    In recent years there has been increasing pressure on all scientists to communicate their work more widely and in more accessible ways.

    For evaluators, communication is not just a question of improving the public image of evaluation, but an integral part of their role and one of the phases of an evaluation. It is one of the things they are paid to do. Here we consider evaluators’ responsibility for communicating their findings and the different ways in which they can do so.

    Daniel L. Stufflebeam’s Program Evaluations Metaevaluation Checklist is worth looking at.

    {Stufflebeam, D. L. (1999) “Program Evaluations Metaevaluation Checklist”, based on The Program Evaluation Standards (Western Michigan University)}

    Michael Scriven’s Key Evaluation Checklist is a useful resource. Scriven’s Checklist poses some challenging questions that are touched on here in good spirit {see Key Evaluation Checklist}:

    a) Can you use control or comparison groups to determine causation of supposed effects/outcomes?

    b) If there is to be a control group, can you randomly allocate subjects to it? How will you control differential attrition, cross-group contamination, and other threats to internal validity?

    c) If you can’t control these, what’s the decision-rule for aborting the study? Can you single- or double-blind the study?

    d) If a sample is to be used, how will it be selected; and if stratified, how stratified?

    e) If none of these apply, how will you determine causation (the effects of the evaluand)?

    f) If judges are to be involved, what reliability and bias controls will you need (for credibility as well as validity)?

    g) How will you search for side effects and side impacts, an essential element in almost all evaluations?

    h) Identify, as soon as possible, other investigative procedures for which you’ll need expertise, time, and staff in this evaluation, plus reporting techniques and their justification.

    i) Is a literature review warranted to brush up on these techniques?

    j) Texts such as Schiffman and Kanuk’s Consumer Behaviour may provide some useful insights during the evaluative process.

    {Schiffman, Leon G. and Kanuk, Leslie Lazar (1994) Consumer Behaviour, Prentice-Hall International Editions}

    As previously mentioned, the University of Alabama’s EVALUTALK site has a host of useful insights about evaluation design. As discussed by Fred Nichols of Distance Consulting, recent discussions have focused on Roger Kaufman’s mega-planning model, based on his notion of needs assessment.

    “Logic models can be described as frameworks for thinking about (including evaluating) a program in terms of its impact, stakeholders, processes, inputs etc. Typically these run from inputs through activities/processes to outputs/products, outcomes/results and impact, including beneficiaries.”

    {Fred Nichols, Senior Consultant, Distance Consulting, on EVALUTALK, American Evaluation Association Discussion List [[email protected]]}

    In response to Fred Nichols’ comments, Sharon Stone, on the same EVALUTALK, comments on the assumptions that include program theory and external conditions (meaning factors not included that could affect, positively or negatively, the hypothesized chain of outputs and outcomes).

    Stone poses two questions:

    “Are these just ‘logical chains’, or are these cause and effect?”

    “Either way – are things really that simple, or do we need to pay more attention to those ‘external’ factors, and how they are identified as external?”

    {Sharon Stone, Evaluator, on EVALUTALK, University of Alabama, September 2007}

    Patton (1980) has estimated over a hundred approaches to evaluation. He describes four major framework perspectives – the experimental, the economic, the developmental and the managerial.

    {See Patton (1980) “Qualitative Evaluation Methods”, London, Sage, c/f “Evaluation Purpose and Theory” in Evaluating Health Interventions}

    Patton claims:

    “One reason why evaluation can be confusing is that there are so many types of evaluation: case-control, formative, summative, process, impact, outcome, cost-utility, audit evaluations.”

    {See Patton, M. Q. (1997) Utilisation-Focused Evaluation: The New Century Text, 3rd edn.}

    Funnell (1996) has some views on Australian practices in performance measurement. Her 1996 article in the Evaluation Journal of Australasia provides a broad-brush review of the state of evaluation for management in the public service.

    Funnell provides explanations of jargon such as benchmarking, TQM and quality assurance, and she also explores issues relating to the current political climate of progressive cutbacks and how these have affected the use of process evaluation. The form of process evaluation she is examining is seen as ‘managerial accountability’ (p 452).

    Funnell also explores the impact of cutbacks on the conduct of evaluations, on the levels of evaluation expertise available, and on evaluation independence and rigor. Her arguments on the impact of market-based policies imply there could be both benefits and dangers.
    {Funnell, S. (1996) “Reflections on Australian practices in performance measurement 1980-1995”, Evaluation Journal of Australasia 8(1), 36-48}

    {See also Eggleton IRC 1990 (revised 1994). Using performance indicators to manage performance in the public sector. Decision Consulting & New Zealand Society of Accountants: 1-124. c/f Australian Institute of Health and Welfare (AIHW) 2000. Integrating indicators: theory and practice in the disability services field. AIHW cat. no. DIS 17. Canberra: AIHW. (Disability Series)}; Particularly Appendix: Participation data elements from the draft National Community Services Data Dictionary.

    Hawe, Degeling and Hall (1990) have some ideas on survey methods and questionnaire design.

    {Hawe, P., Degeling D., & Hall, J (1990) Evaluating Health Promotion, Ch 7 Survey Methods and Questionnaire Design, Sydney, McLennan & Petty}

    These authors describe random, systematic, convenience and snowball sampling, and look at questionnaire layout and presentation, the need for piloting, and some simple basic descriptive analysis of quantitative and qualitative data. More sophisticated analysis, such as may be warranted before any decision is made by the Government to deregulate the energy industry, may require the employment of a highly trained researcher.

    These authors examine (a) the types of items; (b) questionnaire layout and presentation; (c) the need for piloting (often overlooked by evaluators undertaking small-scale evaluations); and (d) maximizing response rates.

    Note their comments on the analysis of quantitative and qualitative data. These comments describe simple, basic descriptive analysis. For more sophisticated analysis evaluators should employ a trained researcher.

    Funnell (1997) has discussed program logic as a tool for designing and evaluating programs. This is simply a theory about the causal linkages amongst the various components of a program, its resources and activities, its outputs, its short-term impacts and long-term outcomes.

    It is a testable theory, and must be made explicit as a first step to testing its validity.

    The process by which this is achieved is program analysis. This is a job for an expert in evaluation where major government policy is being reexamined.

    {Funnell, S. (1997) “Program Logic: An Adaptable Tool for Designing and Evaluating Programs” in Evaluation News and Comment, V6(1), pp 5-17. Sue Funnell is Director of Performance Improvement Pty Ltd and chair of the AES Awards Committee.}

    As Funnell points out, the many models of program theory
    …. “date back to the 1970s and include amongst others Bennett’s hierarchy of evidence for program evaluation within the context of agricultural extension programs and evaluability assessment techniques developed by Wholey and others.”

    A typical program logic matrix may include a grid that includes ultimate and intermediate outcomes, and immediate impacts, with success criteria being measurable and specific in accordance with the SMART principles.

    {Ibid, Funnell, Program Logic, p 5}

    One theme in the responses (to EVALUTALK), as summarized by Johnny Morrell, is that:

    “…..logic models can be seen as constructions that can be used to test key elements of a program’s functioning.

    Related to 1.1 is the notion that logic models can be seen in terms of path models in analytical terms.
    To me, this gets at the notion that while there is a useful distinction between “design” and “logic model”, the distinction is a bit fuzzy. Presumably, if one had enough data, on enough elements of a logic model, one could consider the logic model as a path model that could be tested.

    From a practical point of view, I still see logic models as guides for interpretation, and design as the logic in which we embed data to know if an observed difference is really a difference. But the distinction is not clean.

    {American Evaluation Association Discussion List [[email protected]] as summarized by Johnny Morrell, PhD, Senior Policy Analyst, Member American Evaluation Association EVALUTALK Discussion Group}

    But the distinction is not clean; any given logic model is never anything more than a work in progress that has to be updated on a regular basis. With this approach, logic models (and the evaluation plans they drive) can be updated as the consequences of program action evolve.

    {Johnny Morrell on EVALUTALK, American Evaluation Association}.

    The major point in this category is that “design” means a lot more than a logic for looking at data. According to this view, “design” includes procedures for gathering data, schedules for doing evaluation tasks, and so on. Johnny Morrell calls this:

    “an evaluation plan and reserve the term ‘design’ for the logical structure of knowing if observations have meaning.”

    {Ibid Johnny Morrell}

    There is a consensus amongst EVALUTALK members that this task is typically undertaken by independent evaluators and can be a stand-alone evaluation if the only questions addressed focus on operational implementation, service delivery and other matters. This form of evaluation is often carried out in conjunction with an impact evaluation to determine what services the program provides, to complement findings about what impact those services have.

    One example of a combined process and summative evaluation is shown in the study reported by Waller et al. (1993).

    {Waller, A. E., Clarke, J. A., & Langley, J. D. (1993). “An Evaluation of a Program to Reduce Home Hot Water Temperatures”. Australian Journal of Public Health 17(2), 116-23.}

    In that study, the summative component was built into the original program design. The findings were inconclusive and relatively useless, primarily because of flaws in the conceptual assumptions made. However, there were lessons to be learned for designing other similar studies, so the pilot study was not entirely wasted.

    Rossi examines outputs and outcomes as distinct components of an evaluative program, with the former referring to products or services delivered to program participants (which can be substituted for end-consumers) and with outcomes relating to the results of those program activities (or policy changes).

    Program monitoring can be integrated into a program’s routine information collection and reporting, when it is referred to as an MIS, or management information system. In such a system, data relating to program process and service utilization are obtained, compiled and periodically summarized for review.

    The University of Alabama’s EVALUTALK site has a host of useful insights about evaluation design. As discussed by Fred Nichols of Distance Consulting, recent discussions have focused on Roger Kaufman’s mega-planning model, based on his notion of needs assessment.
    “the use of logic models (may be seen as) a consensus-building tool. The notion is that logic models come from collaborative cross-functional input from various evaluator and stakeholder groups. Thus, the act of building a logic model works toward common vision and agreed-upon expectations.”

    Swedish evaluator John Ovretveit (1997, reprinted 2005) has written a classic text on evaluative intervention. Though focused on health interventions, the principles are as relevant to other areas.

    {Ovretveit (1997) Evaluating Health Interventions. Open University Press/McGraw-Hill (reprinted 2005)}

    Rossi’s evaluation theory is about whether the intentions of the program were realized through delivery to the targeted recipients.

    {Rossi, P., Freeman, H. and Lipsey, M. (1995) “Monitoring Program Process and Performance”, in Evaluation: A Systematic Approach (6th edition), Sage, pp 191-232}

    Of quality assurance, Davey and Dissinger said:

    “Quality assurance (QA) and evaluation are complementary functions which collect data for the purpose of decision-making. At the process level, quality assurance provides both a system of management and also a framework for consistent service delivery with supporting administrative procedures. When implemented appropriately, QA methods provide rapid feedback on services and client satisfaction, and a means to continuously upgrade organizational performance.

    Despite client feedback being part of QA, it lacks the depth provided by evaluation in determining individual client outcomes from a person centered plan for service delivery.”

    {Davey, R. V. and Dissinger, M (1999) “Quality Assurance and Evaluation: essential complementary roles in the performance monitoring of human service organisations.” Paper presented at Australasian Evaluation Society Conference, Melbourne 1999, p 534-550}

    In April 2008 Bill Fear, a regular online contributor to EVALUTALK, the American Evaluation Association Discussion Group, posted on the topic of self-efficacy. His insights are topical, so I quote them below:

    “Why do policy makers make such bad policy most of the time? Why is good policy so badly implemented most of the time? Why don’t policy makers listen to honest evaluations and act on the findings? And so on.

    Could we actually bring about meaningful changes by giving people the tools to think things through and act accordingly? Does empowerment actually mean anything? (Well, yes, but it seems to lack substance as a term in its own right.)

    Does anybody ask these questions? Or is everybody just concerned with the latest methodology which will always be historic not least because it can only be applied to the past (there is an argument there).

    I digress. The point is, to my mind at least, the importance of self-efficacy in the field of evaluation has been overlooked at our expense.”

    {Note: Bill Fear, BA (Education), MSc (Social Science Research Methods), PhD (Cognitive Psychology). Member, UK Evaluation Society. He sits on the UKES council and is a member of the American Evaluation Association.

    He has excellent research and evaluation experience, as well as a solid grounding in PRINCE project management. He has attended top-level training programs in the US with both Michael Scriven and Michael Patton. Recent experience includes working for the Office for National Statistics, where he led a large index rebasing project and helped set up the development of both a banking and an insurance index for the corporate sector. He is currently running the Investing in Change project (a Wales Funders Forum project). This project is using an evaluation framework to explore funding of the voluntary sector from a funders’ perspective.

    A recent achievement includes building a partnership with the Directory of Social Change to deliver a Funding Guide for Wales. He presents workshops on the emerging findings of this project to a wide range of policy makers. He is frequently asked to comment on evaluation methodology and proposals.}

    As discussed in my 2007 submission to the AEMC’s Retail Competition Review, the companion Wallis Consulting retailer and consumer surveys identified fairly well-matched perceptions, according to the summary comparative findings. Awareness levels amongst consumers, beyond knowing of the ability to choose, were clearly extremely low.

    MK Comment:
    Since energy is a low-engagement commodity/service, active marketing is necessary, with product differentiation and attractive offers including a range of convenience options or discount packages.

    MK Comment:

    Evaluation and analysis of the factors impacting on market failure: interpretations that switching conduct is predictive of real outcomes in an unstable market are yet to be substantiated.

    Much discussion on the Productivity Commission site and in responses to AEMC and other consultative processes has focused on behavioural economics and the value of superficial evaluation of switching conduct. I will not repeat those arguments here, save to say that the data relied upon does not appear to robustly embrace these principles.

    Again I explain that my role was to source, collate and present the above material. All the credit for the reasoning goes to the professional evaluators who took part in the EVALUTALK discussions highlighted, or who authored the books and papers that I have personally read and felt would be useful to highlight as matters of topical interest, the more so as health policy and governance are high on the agenda for reform. My thanks to those who made it possible for me to put this together and publish it for public use.

    Prepared and collated by Madeleine Kingston

    Cheers

    Madeleine

  12. “So there are lots of ways in which public servants could blog. And yes, there are ways they shouldn’t blog.”

    Wonderful Nick. The mostly corporatised ex-public service still exists in a vacuum.

    Nothing like a good ‘out’ is there?

    Such saves explaining so much and avoids involvement with anything causing detriment to the citizenry.

    I have a memory of my father’s lot of colleagues back last century.

    They were archetypal public servants – technicians, they were.

    Khaki shorts just covering their knobby knees – walk sox and serviceable shoes.

    Buff shirts with epaulettes – and they all smoked pipes.

    They all cut quite the figure and always answered with a “Harrrumph” – or – “Phnutttt” – without anything like any adequate answer to a direct question.

    It is only recently that I’ve worked through what absolute cowards they were and how, compared to them, their replacements in the modern corporatised agencies are infinitely worse.

    After corporatisation they have become subsumed into a bitter occupational environment and are therefore infinitely less inclined to say a word.

    Wonderful commonwealth this to sell off and put into a sort of slavery the very people who work hard to keep the place going.

  13. Gosh Bunsen. So cynical. And so angry.

    Now I will admit to being disillusioned. But I am hoping that things will change for us all for the better. The time is ripe for a big shift and a re-look at policies, how they are formed and whether they are sustainable.

    The dialogue has to start somewhere.

    I am pleased about Gov2 and the initiatives being taken to create a fairer, more open system of governance. If blogging helps to get a meaningful dialogue going, or even just one that helps people stay connected, we should all keep an open mind about outcomes.

    Cheers

    Madeleine

  14. “Gosh Bunsen. So cynical. And so angry.”

    And so I shall reply, thus -

    Bless you!
    No gosh about it Madeleine: I may be reasonably well advised and of a certain age, therefore possessing a certain fund of experience and life skills.
    One thing I am not is angry.
    Believe me when I say that anger dilutes effort and the total concentration upon one’s objectives.

    Yes. I’m an engineer and after a fairly reasonable lifetime of squashing gremlins out of machinery – the stupidity of humankind is just a background noise.
    But why do you say that you’re disillusioned?

    I note your tome above.
    After the strength of all that you should be sitting back and having a break, a cigar, surely?
    But just between you and me I gain the inkling that you can’t quite believe all that effort was worth it.
    Right?
    Because while, as you say, and with which I wholeheartedly agree – “The dialogue has to start somewhere.” – we both know the cards are stacked so high that either they barricade any other view, or, if somehow collapsed, we end by smothering under their weight.

    Your final paragraph is the one that gives me hope while at the same time giving me so much concern.
    It all does turn around freedom of speech and in my experience little of that exists here already without the proposed Federal censorship.
    Take my advice if you would.
    There are things I have a pressing need to say about how completely useless and utterly counterproductive the whole turnout has been in Oz since 1788.
    While that may sound rash, I admit to being of a certain age and therefore having my own ‘track record’, which, combined with my seniority and background, once gave me a certain credence ‘in court’.
    Putting it more bluntly – what I expected and did my best to strive towards has been utterly subsumed to the overweening stupidity of the temporary raffle winners of the election process.
    Particularly in Queensland their sort have overtaken the ‘Divine right of Kings’ and have managed to turn what was anyway nothing more than a glorified penal colony into something immensely worse.
    But having spoken freely, I suppose that I shall be immediately moderated into extinction.

    If I survive ‘moderation’ I look forward to your reply.
    Tell me whether I am angry, or simply in possession of the facts?

  15. Hi again Bunsen

    Have sent my first reply to the above. It awaits moderation, as does the second part of my hat-trick evaluation blog to which you responded previously.

    I agree that Oz needs revamping, and that the debate about more appropriate policies should be receiving top priority in so many arenas.

    I share your concerns about Queensland in more ways than you know, but if I tried to explain here I might risk a permanent ban from Club Troppo, so I won’t. Alas, Queensland is not the only problem.

    I am still trying to work out how best to avoid moderation since I naturally want to be seen to be friendly and courteous but willing to exercise my constitutional right to speak freely – like you.

    Keep up the good work. A little black humor hurts no-one.

    Cheers

    Madeleine (Kingston)
