How to be a Good Adviser by Playing Pretend

When I left a job about a year ago, one of my friends and coworkers asked me at my going-away party if I had any advice to pass on to the team. At the time, I said that my advice was not to give generalized advice without a specific issue in mind, because it wouldn’t contain actionable information that would improve the receiver’s experience. With the benefit of time, I can see that there are a few more wrinkles to discuss regarding advising.

Most of my early experience with advising came from my school and university years. Later, I’d go on to advise my friends on their business ventures by asking questions, then following up with more questions. I’ll disclose a caveat to my thinking on advising: I’ve never been especially keen on asking for advice because of all the bad advice I’ve received over the years. My negative advising experiences have given me a lot of ideas to chew on, though.

There is a distinction between offering a piece of advice and being an actual adviser, and for this piece I’ll touch on both, with an emphasis on the latter. I’d like to revisit that earlier sentiment and delve a little deeper. Before I do, a brief discussion of what advice is and what advisers are is in order.

Generally speaking, people are familiar with the concept of taking advice from others regarding areas outside their expertise. Additionally, people are usually comfortable with the idea of providing advice to others when prompted– and, frequently to the frustration of others, when they are not prompted. Advice is the transfer of topical information or data by a third party to a person looking for a good outcome. A large share of our communication consists of offering, requesting, or clarifying advice.

The concept of advice as information will be familiar to almost everyone. Frequently, the topical information that is elicited by a request for advice is anecdotal. If the adviser is careless or not directed, the anecdotal information offered to the advised may merely be tangentially related or actually unrelated to the issue at hand. Not everyone pays close attention to their outgoing advice if they have no skin in the game. The main problem with anecdotal evidence is that it refers to specific instances of a trend rather than the rules which govern that trend. Yet, most advice is anecdotal, perhaps as an artifact of humanity’s sensitivity to personal stories rather than hard data or universal laws.

Informally, it’s nearly impossible to escape anecdotal evidence when requesting or giving advice. Frequently, an adviser will forgo telling the actual anecdote and skip right to the advice that they have distilled from their own experience, leaving the advised with an even more incomplete view. This has predictable consequences when paired with people’s tendency to do as others tell them. Working from an incomplete group of anecdotes culled from the experience of others and processed from an uncomfortable position of ignorance, people make decisions based on the emotions of others rather than clear-headed analysis.

I am sure nearly everyone has received completely heartfelt yet completely detrimental advice in their time. If we are lucky, we avoid the consequences of receiving bad advice and catch the mistakes of our advisers in time to reject their thoughts and prevent internalization. If we are unlucky, we follow the path to nowhere and are upset with the results.

Part of maturity is understanding that while others are capable of delivering bad advice, we too are likely to give bad advice if given the chance. We don’t have to commit to delivering advice if we don’t feel qualified, nor do we have to ask for advice or follow advice once given. Advice is just a perspective on an issue, and not all perspectives are equal.

Critically, good advice is specific and actionable rather than vague. If the best that an adviser can do is offer a general direction to follow up on, you’re outside the realm of their experience or outside the amount of effort they’re willing to invest in you. A typical red flag for bad advice is that it’s delivered quickly, sleepily, or nearly automatically.

Good advising is extremely effort intensive! Rid yourself of advisers who don’t respect you enough to apply themselves fully. In my experience, the prototypical awful adviser is coerced into the role rather than choosing it themselves. University advisers are the clearest example of this kind of forced advising. Identify which advisers are around only because they’re required to be, and then avoid them and their bad advice.

So, how are we going to limit our ability to deliver bad advice and maximize our delivery of good advice? Should we simply stonewall all requests for advice and refuse to ask others for help? I don’t think that this is the answer, because advice is one of the principal ways in which we can share the experiences of others and make use of experiences that we have not had ourselves. Sharing experiences is a critical component of being human, and it’s unlikely that we could stop even if we tried.

The way that I propose to avoid delivering bad advice and to actually deliver good advice is to use a mind-trick on ourselves. The mind-trick that I am referring to is playing pretend. First, I’ll need to build a mental image of the thing I want to pretend to be– the best possible adviser. Then, when it’s time to give advice, I’ll be able to pretend to be the embodiment of that image and put myself in the correct mindset for delivering good advice. After I’ve built the barebones of this mental image, taking it out for a test run with a hypothetical request for advice will help to fill in the details and also provide a template for how to think when it’s time to deliver real advice.

What are the properties of this mental image of the ideal adviser? I think that the perfect adviser is a professorial figure, and so adopting an academic tone and patient, receptive train of thought is necessary. Advising someone else shouldn’t be careless or haphazard, so the perfect adviser should mentally state an intention to provide their undivided and complete attention to the pupil for the duration of the session. The aim is to achieve a meditative focus on the present where the power of the adviser’s knowledge and experience can act without interference. The adviser is never emotional. Value judgments are deferred or unstated; the details and the pupil are at the forefront.

In order to advise properly, this professorial type will know the limits of his knowledge as well as his strong points, and will weight his statements to the pupil in accordance with how much he really knows, making sure to be precise with his language and to qualify his statements. Reaching the limits of the adviser’s knowledge isn’t something to be ashamed of, as it’s an interesting challenge for the ideal adviser to chew on.

The aim of the perfect adviser is to consider the particular details of the situation of his pupil, relate them to the universal trends which the adviser has uncovered with conscious effort, and then use a combination of the universal trends and the particulars of the pupil to offer a prescription for action. The mental image of the adviser will explicitly recite the universal trends to himself as he ponders the direction to indicate to his pupil. The conversation between the pupil and the adviser is marked by long pauses as the adviser takes the time to call critical trends and details into his working memory so that the pupil may make use of them. Advising is a conversation that can’t be rushed, because the adviser might forget to make an important connection or fail to communicate in a precise way. The best advising has no time limit.

With each stanza of conversation, the adviser will find that his idea of the prescription in progress is stalled by a facet of the pupil’s situation which hasn’t been discussed. The adviser asks deeply focused questions which will unblock the progress of his advice draft. The draft will have to be completely reworked in light of information gathered from the pupil. Once the draft is completed, the adviser will ask validating questions to see whether his draft is workable and realistic. Upon validation, the adviser will deliver the draft in a reassuring yet detached fashion.

I actually use this mental image when I’m called on to give advice, and I think it helps a lot. “Playing pretend” is just a convenient way of stepping into a foreign mindset without getting too self-conscious. The important takeaway here is that the mindset of being a good adviser is very different from our normal range of thought because it is both clinical and creative: clinical in the sense that facts and particulars are recognizable within a general framework, and creative in the sense that the solution to the clinically described problem probably doesn’t have a pre-established treatment.

Advising is a skill that can be learned and perfected, though it’s seldom prioritized. I think that prioritizing becoming a good adviser is absolutely essential if you think that giving advice is a core part of what you do. For the most part, “first do no harm” is a maxim that I wish more advisers practiced. If you liked this article, follow me on Twitter @cryoshon and check out my Patreon page! I’ll probably revisit this article when I have a bit more experience advising.

How to Ask A Good Scientific Question

One of the first tasks a scientist or curious person must undertake before experimentation is the formulation and positing of a scientific question. A scientific question is an extremely narrow question about reality which can be answered directly and specifically by data. Scientists pose scientific questions about obscure aspects of reality with the intent of discovering the answer via experimentation. After experimentation, the results are compared with the most current explanation of reality, which is then adjusted if necessary. In the laboratory, the original scientific question will likely take many complicated experiments and a great deal of careful attention before it is answered.

For everyone else, the scientific question and experimental response is much more rudimentary: if you have ever wondered what the weather was like and then stepped outside to see for yourself, you have asked a very simple and broad scientific question and followed up with an equally simple experiment. Experiments render data, which is used to adjust the hypothesis, the working model that explains reality: upon stepping outside, you may realize that it is cold, which supports your belief that it is currently winter.

Of course, a truly scientific hypothesis will seek to explain the ultimate cause as well as the proximate cause, but we’ll get into what that means later. For now, let’s investigate the concept of the hypothesis a little bit more so that we can understand the role of the scientific question a bit better.

Informally, we all carry countless hypotheses around in our heads, though we don’t call them that, and we almost never consider them as models of reality informed by experimentation because of how natural the scientific process is to us. The hypotheses we are most familiar with are not even mentioned explicitly, though we rely on them deeply; our internal model of the world states that if we drop something, it will fall.

This simple hypothesis was likely formed early on in childhood, and was found to be correct over the course of many impromptu experiments where items were dropped and then were observed to fall. When our hypotheses are proven wrong by experimentation, our response is surprise, followed by a revision of the hypothesis in a way that accounts for the exception. Science at its most abstract is the continual revision of hypotheses after encountering surprising data points.

If we drop a tennis ball onto a hard floor, it will fall– then bounce back upward, gently violating our hypothesis that things will fall when dropped. Broadly speaking, our model of reality is still correct: the tennis ball does indeed fall when dropped, but we failed to account for the ball bouncing back upward, so we have to revise our hypothesis to explain the bounce. Once we have dropped the tennis ball a few more times to ensure that the first time was not a fluke, we may then adjust our hypothesis to include the possibility that some items, such as tennis balls, will bounce back up before falling again.

Of course, this hypothesis adjustment regarding tennis balls is quite naive, as it assigns the property of bouncing to certain objects rather than to a generalized phenomenon of object motion and collision. The ultimate objective of the scientific process is to resolve vague hypotheses into perfect models of the world which can account for every possible state of affairs.

Hypotheses are vague and broad when first formed. Violations of the broad statements allow for clarification of the hypothesis and add detail to the model. As experiments continue to fill in the details of the hypothesis, our knowledge of reality deepens. Once our understanding of reality reaches a high enough level, we can propose matured hypotheses that can actually predict the way that reality will behave under certain conditions– this is one of the holy grails of scientific inquiry. Importantly, a prediction about the state of reality is just another type of scientific question. There is a critical caveat which I have not yet discussed, however.

Hypotheses must be testable by experimentation in order to be scientific. We will also say that hypotheses must be falsifiable. If the hypothesis states that the tennis ball bounces because of magic, it is not scientific or scientifically useful because there is no conceivable experiment which will tell us that “magic” is not the cause. We cannot interrogate more detail out of the concept of “magic” because it is immutable and mysterious by default.

Rather than filling in holes in our understanding of why tennis balls bounce, introducing the concept of magic as an explanation merely forces us to re-state the original question, “how does a tennis ball’s bounce work?” In other words, introducing the concept of “magic” does not help us to add details which explain the phenomenon of tennis balls bouncing, and ends up returning us to a search for more details. In general, hypotheses are better served by only introducing new concepts or terminology when necessary to label the relation of previously established data points to each other; the same goes for the coining of any new term.

Now that we are on the same page regarding the purpose of scientific questions– adding detail to hypotheses by testing their statements– we can get into the guts of actually posing them. It’s okay if the scientific question is broad at first, so long as increasing levels of understanding allow for more specific inquiry. The best way to practice asking a basic scientific question is to imagine a physical phenomenon that fascinates you, then ask how it works and why. Answering the scientific question of “why” usually amounts to catching up with previously performed research. Answering “how” will likely involve the same, although it may run into the limits of human knowledge and require new experimentation to answer definitively. I am fascinated by my dog’s penchant for heavily shedding hair. Why does my dog shed so much hair, and how does she know when to shed?

There are actually a number of scientific questions here, and we must isolate them from each other and identify the most abstract question we have first. We look for the most abstract question first in order to give a sort of conceptual location for our inquiry; once we know what the largest headline of our topic is, we know where on the paper we can try to squint and resolve the fine print. In actual practice, finding the most abstract question directs us to the proper body of already performed research.

Our most abstract question will always start with “why”. Answering “why” will always require a more comprehensive understanding of the general principles that govern the phenomena in question, whereas “what” or “how” typically refers to an understanding that is limited to fewer instances. So, our most abstract question here is, “Why does my dog shed so much?”

A complete scientific explanation of why the dog sheds will include a subsection which describes how the dog knows when to shed. Generally speaking, asking “why” brings you to the larger and more comprehensively established hypothesis, whereas asking “how” brings you to the more narrow, less detailed, and more mechanistic hypothesis. Answering new questions of “why” in a scientific fashion will require answering many questions of “how” and synthesizing the results. When our previously held understanding of why is completely upended by some new explanation of how, we call it a scientific revolution.

At this point in human history, for every question we can have about the physical world, there is already a general hypothesis which our scientific questions will fall under. This is why it is important to orient our more specific scientific questions of “how” properly; we don’t want to be looking for our answer in the wrong place. In this case, we can say that dogs shed in order to regulate their temperature.

Temperature regulation is an already established general hypothesis which falls under the even more general hypothesis of homeostasis. So, when we ask how the dog knows when to shed, we understand that whatever the mechanistic details may be, the result of the sum of these details will be homeostasis of the dog via regulated temperature.

Understanding the integration between scientific whys and hows is a core concept in asking a good scientific question. Now that we have clarified the general “why” by catching up with previously established research, let’s think about our question of “how” for a moment. What level of detail are we looking for? Do we want to know about the hair shedding of dogs at the molecular level, the population level, or something in between? Once we decide, we should clarify our question accordingly to ensure that we conduct the proper experiment or look for the proper information.

When we clarify our scientific question, we need to phrase it in a way such that the information we are asking for is specific. A good way of doing this is simply rephrasing the question to ask for detailed information. Instead of asking, “how does the dog know when to shed”, ask, “what is the mechanism that causes dogs to shed at some times and not others.”

Asking for the mechanism means that you are asking for a detailed factual account. Indicating that you are interested in the aspect of the mechanism that makes dogs shed at some times but not other times clarifies the exact aspect of the mechanism of shedding that you are interested in. Asking “what is” can be the more precise way of asking “how.”

The question of the mechanism of shedding timing would be resolved further into even more specific questions of sub-mechanisms if we were in the laboratory. Typically, scientific questions beget more scientific questions as details are uncovered by experiments which attempt to answer the original question.

As it turns out, we know from previous research that dog shedding periods are regulated by day length, which influences melatonin levels, which influences the hair growth cycle. Keen observers will note that there are many unstated scientific questions which filled in the details where I simplified using the word “influences”.

Now that you have an example of how to work through a proper scientific question from hypothesis to request for details, try it out for yourself. Asking a chain of scientific questions and researching the answers is one of the best ways to develop a sense of wonder for the complexity of our universe!

I hope you enjoyed this article; I’ve wanted to get these thoughts onto paper for quite a long time, and I assume I’ll revisit various portions of this piece later on because of how critical the topic is. If you want more content like this, check out my Twitter @cryoshon and my Patreon!

How to Become a Smarty Pants

I’ve seen a small amount of interest in a few communities regarding building status as an “intellectual” in the colloquial sense, and I think it’s probably more correct to say that people would rather be perceived as smart than as dumb, which is completely fair.

This article could also be called “How to Look and Sound Like an Intellectual” although frankly that implies a scope that is much larger than anything I could discuss. So, we have a lighthearted article which purports to transform regular schlubs into smarty pants, if not genuinely smart people. If you already fashion yourself as a smarty pants, read on– I know you’re already into the idea of growing your capacities further. Hopefully my prescription won’t be too harsh for any given person to follow if they desire.

While it seems a bit backward to me to desire a socially assigned label rather than the concrete skills which cause people to give that label to others, building a curriculum for being a smarty pants seems like an interesting challenge to me, so I’ll give it a shot. I hope that this will be a practice guide on how to not only seem smarter, but actually to think smarter and maybe even behave smarter. The general idea I’m going to hammer out here is that becoming an intellectual is merely a constant habit of stashing knowledge and cognitive tools. The contents of the stash are subject to compound interest as bridges between concepts are built and strengthened over time.

In many ways, I think that being a smarty pants is related to being a well rounded person in general. The primary difference between being seen as an intellectual and being seen as a well rounded person is one of expertise. The expertise of an intellectual is building “intellect”, which is an amorphously defined faculty which lends itself to making witty rejoinders and authoritative-sounding commentary. There’s more to being a smarty pants than puns and convincing rhetoric, though: smarty pants everywhere have been utilizing obscure namedropping since the dawn of society. Playtime is over now, though. How the heck does a person become a smarty pants instead of merely pretending to be one?

Being a smarty pants is a habit of prioritizing acquisition of deep knowledge over superficial knowledge. Were you taught the theory of evolution in school? Recall the image that is most commonly associated with evolution. You probably picked the monkey gradually becoming a walking man, which is wrong. The superficial knowledge of the idea that humans and monkeys had a common ancestor is extremely common, but the deeper knowledge is that taxonomically, evolution behaves like a branched tree rather than a series of points along a line.

See how I just scored some smarty pants points by taking a superficial idea and clarifying it with detailed evidence which is more accurate? That’s a core smarty pants technique, and it’s only possible if you have deep knowledge in the first place. Another smarty pants technique is anticipating misconceptions before they occur, and clearing them up preemptively. How should you acquire deep knowledge, though?

Stop watching “the news”, TV, movies, cat videos, and “shows”. Harsh, I know– but this step is completely necessary until a person has rooted themselves in being a smarty pants. This media is intended to prime you for certain behaviors and thoughts, occupy your time outside of work, and provide a sensation of entertainment rather than enriching your mind. The more you consume these media, the less your mind is your own, and the more your mind is merely a collection of tropes placed there by someone else. Choosing to be a smarty pants is the same as choosing isolation from the noise of the irrelevant.

For the most part, these media are sources of superficial information and never deep information. You can’t be a smarty pants if you’re only loaded with Big Bang Theory quotes, because being a smarty pants means knowing things that other people don’t know and synthesizing concepts together in ways that other people wouldn’t or couldn’t. There is zero mental effort involved in consuming the vast majority of these media, even the purported “educational” shows and documentaries which are largely vapid. Seeing a documentary is only the barest introduction to a topic. Intellectuals read, then think, then repeat.

I guess I’ve said some pretty radical things here, but try going back and viewing some media in the light I’ve cast it in. There are exceptions to the rule here, of course: The Wire, The Deer Hunter, American Beauty, or an exceptionally crafted documentary. The idea is that these deeper works are mentally participatory rather than passively consumed; the depth and emotionality that the best audiovisual media convey can be considered fine art, and smarty pants love fine art. During your smarty pants training, I would still avoid all of the above, though. Speaking of your smarty pants training…

Stop reading “the news”, gossip of any kind, Facebook, Twitter, clickbait articles, and magazines.  These things are all motherlodes of superficial information. As Murakami said truthfully, “If you only read the books that everyone else is reading, you can only think what everyone else is thinking.” This concept is absolutely critical because an intellectual is defined by depth of thought, quality of thought, and originality of thought relative to the normal expectation. Loading up on intellectual junk food is useless for this purpose, so get rid of it and you will instantly get smarter.

Notice how I namedropped Murakami there? That’s worth smarty pants points because it’s a conceptual tie-in that is directly relevant to the point I’m trying to make, and it expresses the idea more elegantly than I could on my own. Don’t just namedrop obscure people wildly, as you’ll look more like a jackass than a smarty pants, though the line is blurry at times. Being a fresh-faced smarty pants frequently involves making the people around you feel inadequate, but it shouldn’t when practiced properly!

The purpose of self-enrichment is for self-benefit, and should not be used for putting down others. Frequently, knowledge may be controversial or unwelcome, so begin to be sensitive to that when conversing with others. Life isn’t a contest for who can show off the most factual knowledge– but if it were, a good smarty pants would be in the running for the winner, and that’s your new goal.

Pick an area that will be your expertise. Pick something you will find interesting and can learn about without laboring against your attention capacity. This should be distinct from a hobby. Which topic you address is up to you, but I’d highly suggest approaching whatever topic you choose in a multi-disciplinary manner. If you’re interested in psychology, be sure to devour some sociology. If you’re interested in biology, grab some chemistry and physics. If you’re a philosopher, try literature or history. Your expertise in your chosen field will mature over time, and eventually you should branch out to gain expertise in a new field.

The idea here is that the process of picking an area of expertise is useful to the smarty pants. By evaluating different areas, the smarty pants will get a feel for what they’re interested in, what’s current, and what’s boring. The most intellectually fruitful areas of expertise have a lot of cross-applicability to other areas and concepts, an established corpus of literature, and a lot of superficial everyday-life correlates. Suitable examples of areas of expertise are “the history of science” or “modern political thought”. An unsuitable example of an area of expertise would be “dogs” or “engine design”. Unsuitable areas of expertise aren’t applicable to outside concepts and don’t confer new paradigms of thought.

Start reading books, in-depth articles, and scholarly summaries on topics which you want to develop your expertise in. A smarty pants has a hungry mind and needs a constant supply of brain food, which is synonymous with deep knowledge. Reading books and developing deep knowledge is never finished for the aspiring smarty pants. Plow through book after book; ensure that the most referenced scholarly works or industrial texts are well-understood. Understand who the major thinkers and groups are within the area of expertise, and be able to explain their thoughts and relationships. Quality is the priority over quantity of information, however.

Merely stopping the flow of bad information in and starting a flow of good information isn’t enough to be a real smarty pants, though it’s a good start. In order to really change ourselves into smarty pants, we must change our way of engagement with the world. As referenced before regarding media consumption, a smarty pants must interrogate the world with an active mind rather than a passive mind. What do I mean here?

A passive mind watches the world and receives its thoughts as passed from on high. Passive minds do not chew on incoming information before internalizing it– we recognize this the most pungently when a relative makes regrettable political statements culled directly from Fox News. An active mind is constantly questioning validity, making comparisons to previous concepts, and rejecting faulty logic. An active mind references the current topic with its corpus of knowledge, finding inconsistencies.

Creating an active mind is an extremely large task that I’ll probably break out into another full article, but suffice it to say that the smarty pants must get into the habit of chewing on incoming information and assessing its value before swallowing. Learning how to think and write systematically and how to disagree intelligently are probably both skills that a smarty pants can make use of.

Speaking of relatives, a smarty pants needs to have good company in order to grow. Ditch your dumb old friends and get some folks who are definitely smarter than you– they exist, no matter what you may think of yourself. You don’t really need to ditch your old friends, but you really do need to get the brain juices flowing by social contact with other smarty pants. There are many groups on the internet which purport to be the home of smart people, but my personal choice is HackerNews.

It’ll hurt to feel dumb all the time, but remember that feeling dumb means that you are being exposed to difficult new concepts or information. Feeling dumb is the ideal situation for an aspiring smarty pants because it means that you are feeling pressure that will promote growing to meet the demands of your environment. Every time you feel dumb, catch the feeling, resolve the feeling to an explicit insecurity, then gather and process information until that insecurity is squashed by understanding. Like I said before, this step is unpleasant, but nobody said being a smarty pants was easy.

This concludes my primer on how to be a smarty pants. I’ll be writing more on this topic, though a bit more seriously and more specifically. I’d really like to publish a general “how to think critically” article in the near future, and of course critical thinking is a core smarty pants skill. I have a reading list for the most general and abstract “smarty pants education” that I’ll be publishing relatively soon as well. Until then, try practicing the main points here.

Be sure to follow me on Twitter @cryoshon and check out my Patreon page!

How to Understand and Provide Praise and Criticism at Work

The issues of praise and criticism in the workplace are especially important for employee morale– after all, it feels bad to be criticized and feels good to be praised. The effects of praise and criticism are cumulative, so each must be given carefully and in a targeted, effective fashion. Praising irrelevant or inconsequential attributes of a coworker’s work won’t be as effective as choosing the correct target. By the same token, we all know that feelings of indignation and hurt occur when we feel that we have been criticized unjustly. Of course, we may not be so happy when we receive accurate criticism either. This article is my attempt to bite into the concepts of workplace criticism and praise, tease out the actual psychological phenomena, and offer a constructive path forward that will provide superior quality communication.

First, let’s define criticism and praise. Criticism and praise are after-the-fact identification of priorities, effort invested, and outcome accomplished relative to prior expectations. Praise is an observation that the ordering of priorities, effort invested, and outcome accomplished were more successful than expectations beforehand. Criticism is identification that priorities were not what was expected, and as a result the effort invested may have been insufficient or misplaced, leading to an unexpected outcome that fell short. Neutral observations that are neither exactly criticism nor praise are likely to be identifications of unexpected priority placement or effort investment which did not have an explicitly positive or negative outcome.

By this definition, the two concepts of criticism and praise are in fact the same concept, popularly called “feedback” in corporate doublespeak. I don’t like the term feedback because it’s nonspecific and is frequently a euphemism for criticism, a word people are afraid of due to its emotionally harmful connotations. The fact that the word “criticism” has become verboten is an indictment of the disastrous state of communication skills in corporate life. Discussions of workplace priorities should not spur anxiety within employees, yet they do. Awareness of employee discomfort over receiving criticism has spurred many different investigations into various aspects of criticism, but many employees still struggle.

We should not fear criticism at work– criticism is merely a type of social signalling which indicates that our work priorities were inconsistent with what was expected by others. Adopt a detached mindset, and accept that if we never received praise or criticism because our priorities were always exactly in tune with everyone else’s, we would be closer to ants than humans. We should not fear praise, either!

An inability to accept praise or a rejection of praise at work is merely a fear of admission that individual priorities were not the same as what was expected. A fear of criticism is frequently mirrored by a fear of praise because both pertain to individual deviation from expectation and thus a violation of social conformity. It is human nature to be conformist, so we can forgive an ingrained tendency to avoid ostracization from the group, but we must overcome this tendency if we want to be part of a maximally effective team or organization.

Effective teams and organizations have a shared frame of priorities, which means that identifying deviations from those priorities is important for keeping on the right track. In this sense, we actually need a certain minimum amount of conformity in order to accomplish our group’s goals. With that being said, I am of the opinion that too much conformity is typically far more harmful than too little— a team that is incapable of deviating from expectation is stagnant and inflexible.

So, how do we deliver criticism and praise in such a way that the people we deliver it to get the most helpful impact? The biggest unstated misconception that I regularly come across is that criticism and praise can be doled out without reference to the receiving person. I would like to rectify this misconception, perhaps controversially: the most effective criticism or praise will be carefully calibrated based on what the receiving person prioritized when performing the work.

Let’s unpack that statement. In order to get the biggest psychological impact in the desired direction (more efficacy and team cohesion), we have to understand and empathize with our coworker. We have to get into their head.

Why do you think they prioritized what they prioritized, and does this explain the outcome? What aspect of their work did they seem to have put the most effort into, and what part do they seem proud of? Do they seem anxious, ashamed or avoidant of certain prioritizations or aspects of their work? Why would they feel this way? It helps to have the coworker reiterate exactly what they think the expectations were for a given project.

Identifying insecurities regarding the work in question is a good starting point if the above questions are inscrutable. Frequently during discussions of their work, people will provide clues which indicate that they suspect their actual prioritizations differ from the expected prioritizations that may have been agreed upon at the start of a project. Suspicion of differing priorities does not mean that the person should be criticized! Frequently, deviations from expectation are positive, and are indicative of individual initiative and creativity. Individual initiative and creativity have their time and place, however; certain projects may be too sensitive or intolerant of deviation for an individual’s flair to have a positive impact.

Once you’ve identified points where a coworker’s prioritization or effort invested deviated from the original vision of the team, you have identified a point for criticism or praise. Examine the outcome compassionately: did the coworker’s choice seem as though it would be fruitful at the time? If there was really no need or leeway to reprioritize, and the outcome was worse than what was expected, they have earned criticism because it was the incorrect time for their creativity. Was the unexpected investment of effort fruitful in a surprising way while still accomplishing the original desired outcome? Time for praise.

The trick is to keep your criticism and praise limited, detached, and extremely topical. Find the points of individual initiative that the coworker took while working. If your coworker prioritized the wrong thing which led to a bad outcome, detail the logical chain for them if they aren’t aware that there was a problem. Did recalculating the sales from November waste valuable time that could have been spent compiling those sales into charts? Say so clearly and gently, giving your coworker acknowledgement for creativity but not shying away from the problem: “Though you are right that it’s essential for our data to be correct, prioritizing recalculating the sales from November instead of compiling those sales into charts led to a duplication of previous work which contributed toward us missing our deadline.”

Praise should follow the same formula, provided that the outcome was acceptable.  “Choosing to prioritize recalculation of the sales data over compiling the data into charts allowed us to catch a number of mistakes that we would not have otherwise.” Keep both praise and criticism impersonal! The objective of evaluating your coworker’s work is not to quantify their worth as a human being or “human resource” but rather to identify where their individual decisions were compatible with the objective of the team. Accept their choices as compartmentalized pieces on a per-project basis, then look for trends later on if you’re inclined.

Tone and body language are critical to giving and receiving praise and criticism, too. Because of how uncomfortable people are when discussing deviations from expected priorities, defensive body posture and a clinical, prescriptive tone occur very frequently on both sides of the table when evaluation time comes around. Making a conscious effort to avoid these harbingers of poor communication is absolutely essential! People will detect defensive or vulnerable body language and tone and mirror it when they piece together that criticism is inbound.

Instead, opt for open body language. Signalling warmth and having a benign disposition helps to prevent the other person from clamming up into a defensive posture and allows for praise and criticism to be fully analyzed without emotion. Tone of voice is a bit harder to remember to regulate, but should be carefully considered as well.

Praise should be delivered with a positive and serious tone– adopting a nurturing or parental voice is the most common mistake here. Workplace praise is not the same type of communication as praising your dog for returning its toy or your child for a good report card; workplace praise is clear-sighted objective recognition of successful individual task reprioritization. Praise for a good outcome is not personal, and shouldn’t be confounded by a friendly office relationship.

Criticism should also be delivered with a (slightly less) positive and serious tone. Remember, the purpose here is not to tear the other person down, or talk down to them, but rather to show them that their priorities caused outcomes that were not consistent with the team’s original purpose. Criticism should be delivered at normal speaking volume, and abstracted far away from any frustration you may feel.

A frustrated tone from you will cause the other person to grow defensive, and the maximum positive impact of criticism will not be achieved. A tone of simpering or crestfallen disappointment when delivering criticism will not do: personal emotions or discomfort are not relevant to the discussion of expected priorities and outcomes. Emphasize hope for the future, and move the discussion toward steps for next time around.

I hope you guys enjoyed this piece; I know that I struggle quite a bit with giving and accepting praise, so this article was enlightening for me to think through. Follow me on Twitter @cryoshon and be sure to check out my Patreon page if you like the stuff I’m writing!

How to Improve Work-Stuff, Scientifically!

One of my favorite tasks at work is finding ways to optimize the workflows, actions, or processes that I’m regularly doing. If you do something multiple times per day or week, it’s worth doing it as well as possible, right? In my experience, most tasks or workflows are created thoughtfully, but then executed relatively automatically, and, over time, thoughtlessly. Sure, if you have a workflow that’s deeply detail oriented or requires a lot of conscious, brain-on-task time, you’re likely to be mentally active while you execute it, but actually thinking about the efficiency of the process itself may not be on your mind.

Sometimes I set aside time for process improvements, but usually I fit them into a block of time that I don’t have slated for anything else. Depending on what kind of work you do and what kind of improvements you’re seeking to make, changing your process may require a lot of paperwork. If so, it’s still worth at least investigating whether you can make a change, but the bar for which changes are worth pursuing will probably shift: it makes more sense to batch a ton of small changes together or to overhaul the process entirely.

When optimizing work, take care not to disrupt old dogmas willy-nilly. I propose investigating your workflows scientifically, and determining which optimizations to make scientifically as well. This means that the technique for optimizing workflows I’ll be discussing in this article will be suitable for some kinds of work, but not others. Additionally, my scientific way of investigating beneficial changes may not operate properly for every type of work.

How do you select a process for scientific optimization? The following points are a good guide to seeing whether or not your process can be improved scientifically:

  1. Measurable outcomes and rigorous metrics. In order to think about optimizations scientifically, we need to be able to quantify the pieces we’re talking about. A manufacturing process that produces 15 yellow cubes in 1 hour is an easy candidate for scientific optimization because changes to the process will alter the number, color, or time it takes to produce the cubes. A painting technique that is used to produce impressionistic portraits is not a good choice for optimization, though with some time invested into making qualitative rubrics it may be possible.
  2. Empowerment to experiment. Everyone has bosses, and not everyone’s boss is going to be keen on experimentation with company assets. Having supportive co-workers and bosses is essential to experimenting with process improvements. Bosses may be scared away from the scientific optimization process because it’s resource intensive. Others may be scared due to their own insecurity with scientific pursuits, which tend to be perceived as complicated. Aside from clearance to experiment generally, some processes at work may be open for reinterpretation, whereas others will be sacred and untouchable.
  3. Non-catastrophic failure. Experimenting with the workflow that props up an entire business is sometimes necessary, but should be avoided if it can’t be done safely. The last thing an employee should do is destroy an already-functioning process by attempting to improve it. For some workflows, safe experimentation isn’t possible without the potential for massive fallout if things go wrong, and in these cases making a smaller model to play with typically isn’t possible either. I suggest you avoid playing around with systems that will have bad consequences if they fail or produce null results.
  4. Controls and Variables. If you’re really going to be conducting a scientific evaluation of your workflows, you need to have the ability to create controls and variables for your investigation. This means that it must be possible to keep the majority of your process the same while changing small pieces individually. Additionally, you need to have data for the way that the process behaves under normal, non-experimental conditions. Most workflows have a paperwork component of some kind, so this is a great place to start looking for control data that you can compare your experimental data with after you’ve run your experiment.

Now you know how to evaluate a process for scientific optimization, so let’s dive right into the meat of how to actually run an experiment once you’ve picked a process to change.

  1. First, if you haven’t already, decide what your variables will be. Remember, you should only be investigating one state of one variable for each trial in the experiment. The variables you pick are up to you, but keep in mind that the items you pick as variables are the items which will end up being improved by beneficial changes to the process that you discover after the experiment is over.
  2. Next, decide your controls. The controls are the most important part of getting usable data from the experiment. I suggest having a negative control (the process as executed before the improvements proposed by the experiment). If you want to get fancy and your process permits it, I’d also add in a null control (a control designed to terminate the process from moving forward) and a positive control (a control designed to test your ability to detect positive results and gather data), but these aren’t strictly necessary.
  3. Once you have decided your controls and variables, it’s time to write up an experimental protocol. How will you be isolating your controls from your experimental group? How will you be setting up your controls and altering your variables? What will your output look like? How will you be measuring the results of the experiment? How will data be presented in raw form? This is the hardest step and also the most risky step, scientifically. Ensure that your protocol is as close to the normal, pre-experiment way of doing things as possible in order to minimize variability. An experiment is only as strong as its protocol!
  4. Run your protocol and gather data. Each run of the protocol counts as a trial in your experiment. Take care to follow your protocol to the letter, and record data about how the output of the process changes based off of the state of the variables. Don’t worry about analyzing data yet, just try to stick to the protocol and pay attention to your controls and variables. It’s best to minimize variability by running protocols at the same time of day.
  5. Repeat step 4 as many times as needed. Gather data until you feel as though you have enough trials to make a decision. If you want to be super scientific, do some statistics and determine the sample size you need in order to make a good decision, but for most workplace experiments, this level of application isn’t necessary.
  6. Analyze the data gathered in steps 4-5. Which changes to which variables created the most beneficial changes to your originally stated metrics? Were there any consequences to optimization?
  7. Implement changes to your workflow. This should be quite easy, with data in hand. Be sure to argue for your changes using the data that you gathered scientifically, if necessary. If there’s no boss to convince, then enjoy the fruits of your labor immediately.
  8. Show off your good results! Be sure to keep a record of the way that your workflow was run beforehand, just in case. It also helps to maintain records of how your metrics were performing before your scientific optimizations, so that you can show off the positive differences you effected later on. If your results were negative, don’t sweat– most experiments have negative results. More experimentation might be useful, but know when it’s time to throw in the towel. There isn’t necessarily room to improve every single process, especially if it’s already been through the wringer a few times over the years.

Hopefully this guide was helpful to you; I know that I’ve more or less run this regimen on every workflow and process that I’ve touched throughout my professional life. The core concept is systematically tracking changes to variables. As long as you can keep track of what you’re changing, you can make a causative connection between your changes and the outcome.
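
To make the record-keeping concrete, here is a minimal sketch in Python of how trials from steps 4 through 6 might be logged and compared. It assumes the hypothetical yellow-cube process mentioned earlier; the trial values, condition labels, and field names are all made up for illustration.

    from statistics import mean, stdev

    # Hypothetical trial records for the yellow-cube process described above.
    # Tagging each trial with its condition keeps control and experimental
    # data from ever getting mixed together.
    trials = [
        {"condition": "control",      "cubes_per_hour": 15},
        {"condition": "control",      "cubes_per_hour": 14},
        {"condition": "control",      "cubes_per_hour": 16},
        {"condition": "experimental", "cubes_per_hour": 18},  # one variable changed
        {"condition": "experimental", "cubes_per_hour": 17},
        {"condition": "experimental", "cubes_per_hour": 19},
    ]

    def summarize(condition):
        """Collect the outcomes for one condition and report mean, spread, and count."""
        outcomes = [t["cubes_per_hour"] for t in trials if t["condition"] == condition]
        return mean(outcomes), stdev(outcomes), len(outcomes)

    control_mean, control_sd, n_control = summarize("control")
    exp_mean, exp_sd, n_exp = summarize("experimental")

    print(f"control:      {control_mean:.1f} +/- {control_sd:.1f} cubes/hour over {n_control} trials")
    print(f"experimental: {exp_mean:.1f} +/- {exp_sd:.1f} cubes/hour over {n_exp} trials")
    print(f"difference:   {exp_mean - control_mean:+.1f} cubes/hour")

If you want to be “super scientific” as in step 5, the same tagged records can be handed to a proper significance test (scipy.stats.ttest_ind, for example), but for most workplace decisions a clean log of conditions and outcomes plus the simple comparison above is enough to argue your case.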

If you liked this piece, follow me on Twitter @cryoshon and be sure to subscribe to the email list on the right!

How To Write Systematically in 11.5 bites

Between a few years of working in biomedical research and a philosophy degree from college, I’ve picked up a few things about writing and thinking systematically. Unfortunately, I see a lot of people stumbling in their writing when they try to create complex abstract or technical materials– writing is tough, and accurate, succinct, detailed, and logical writing is even harder.

To me, systematic writing is a method of writing which seeks to transmute the complex relationships between raw or parsed data into a coherent, readable narrative that can be effectively understood and analyzed by someone who is generally knowledgeable on the topic, but who didn’t gather or prepare the data. Systematic writing is part of a greater family of writing that includes scientific writing, technical writing, and financial writing, along with other types I probably haven’t even thought of.

While this definition may seem overly abstract, I’d like to point out that most of our received and sent communications are not systematic; a news anchor is not relaying systematically prepared information to the public, even though the reporters have gone through the trouble of parsing raw data (events that happened) into a narrative (what the anchor says). The quantity of technical detail and data referencing in a news report is slim, as news reports are designed for a very wide audience who have little previous context for the event that happened (the data). An email we send to a colleague referencing data or analysis is not necessarily systematic writing, as it’s entirely possible for a certain context to be inferred between two people; systematic writing provides its own context and content explicitly to the audience.

Systematic writing is typically intended for a small, already-savvy audience, and should only offer the minimum viable context. A reader with general knowledge on the topic of the piece should be able to acquaint himself with a systematically written piece in short order, but a layman should not, because establishing the amount of context required for a layman would involve a lot of background information which falls outside the scope of a particular instance of systematic writing. We don’t want our systematic writing to sprawl, because systematic writing is intensely purposeful and detail-heavy, and lots of background information and tangents dilute the factual details we’re trying to communicate.

So, the title promises 11.5 bites describing the process of writing systematically, and without further ado here’s a primer on how to write and think systematically:

  1. Define your goal. What kind of narrative do you want to make, and what data are you planning on using? Who is going to read the report, and how much context will be required?
  2. Put on your white thinking hat.  To use the terminology of the fantastic thought guide Six Thinking Hats, the white thinking hat is purely unbiased and factual thinking used for establishing a common ground among readers. If you’re going to be writing a systematic document which refers to data, you need to make sure that you don’t take any liberties with the data without explicitly qualifying them as speculation or partially supported. No spin!
  3. Assemble your data. You can’t write systematically without having data. Ensure that your data is collated/parsed/charted in a non-deceptive and easy to understand way– the only person you’re trying to inform at this step is yourself, so it behooves you to be honest about the quality of your data and what knowledge we can actually extract in analysis. If there are computations or manipulations required of your data, now is the time to do them.
  4. Determine the limits of what your data can tell you. Soon, we’ll analyze our data, but first, we need to vaccinate ourselves against narrative mistakes. Though it seems simple, it’s easy to slip up and attribute facts to your data that aren’t actually there. Explicitly state the variables which your data depicts (sales, months). Remember that going forward, all of your statements should be in terms of the variables which you outline here. If you’re not talking about information within the purview of the data that your variables describe, you’re not being systematic.
  5. Extract verbal information from your data. Write down simple statements to this effect, such as, “the data for November showed 42 sales.” If you computed averages or other values in your data assembly step, now is the time to introduce them as simple phrases. If you expect that handling the data in this way will be confusing, document your process simply and clearly so that your audience will understand. Do not introduce any explanation at this point; merely state what the data say, and, if necessary, state how the data were processed. Remember not to speculate; the point of this step is to establish purely factual statements.
  6. Analyze your data at a basic level. Now that you have a series of simple statements depicting your data in an unbiased way, comparisons between data statements can begin. Are the sales from November higher than the sales from October? Write that comparison down if it’s relevant to your originally stated goal, and make sure to directly reference the values in your new synthesis statements. The point of this step is to explicitly state simple relationships of the data, independent of any narrative.
  7. Analyze your data deeply. Stay focused on your original goal during this step. What questions can your impartial data statements answer explicitly? Implicitly? What trends in your data are noteworthy? What points of data are outliers? Can you explain the outliers? In this step, writing more complex statements is necessary. “The sales data from November (42 sales) are higher than October (30 sales), following the upward trend of the fall season. These data tell us that the fall season is our strongest selling period, despite the high sales in December.” Don’t try to speculate or hypothesize about “why” yet; just tease out the more complex relationships in your data and write them down in a clear way. As always, reference your data directly in order to build context for your audience and keep them on the same page. Don’t worry about over-analyzing at this point; we’ll prune our findings later. (A small sketch after this list shows steps 5 through 7 in miniature.)
  8. Ask why. Why did we see the data that we saw in our analysis? What are the general principles governing our data? Address each piece of relevant data with this question, and be sure to answer it briefly. The outliers that were previously identified need special attention at this point. Keep explanations of your data concise and factual, and remember that your explanations are not actually within your data set, so you should draw in outside proof to support them if necessary. It’s okay to hypothesize if you don’t know exactly why certain data turned out the way that they did, but be sure to explicitly label speculation.
  9. Build a narrative using your data, analyses, and explanations. Consider your starting goal, and how to marshal the data, analyses, and explanations in order to accomplish that goal. Your narrative should proceed first with the data, then with a simple factual explanation of the data, then with a more complex analysis of the data, and finish off with an explanation of the data if it’s required. The narrative step of systematic writing is where you pull all of the pieces together into one attractive package for your audience. Don’t neglect graceful segues between different portions of the data set. The final product of this step can be considered a first draft of your systematic writing effort, and may take the form of a PowerPoint presentation, meeting agenda, technical report, or formal paper.
  10. Anticipate questions and comments from your audience. Look for areas in which your explanation, analysis, or data prompt a response, and plan accordingly. Questions regarding your narrative are typically the easiest to address, since you can clarify what you’ve already written about why your data appear the way they do. Questions regarding your analysis can get a bit technical depending on the audience, so you should be prepared to refer back to the source data in your responses. Questions regarding the data itself or the parsing of the data are the most difficult; typically, the outliers will be under the most scrutiny, and their data quality may be called into question. I find that it helps to get out in front of questions regarding outliers, addressing them to your audience before taking questions.
  11. Prune non-critical information. This is the step where most of the data-statements and analysis statements meet their demise. Which analyses, explanations, and narrative elements aren’t strictly serving your original goal? Remove extraneous information to create a hardened product. Ensure that the relevant context and core data analysis remains, and don’t build a misleading narrative by omitting contradictory relevant data.
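
As referenced in step 7, here is a minimal sketch in Python of steps 5 through 7: turning a small data set into factual statements, then simple comparisons, then a slightly deeper observation. The monthly sales figures are the hypothetical ones used in the examples above, and the variable names are purely illustrative.

    # Hypothetical monthly sales, matching the worked examples in steps 5-7.
    sales_by_month = {"September": 25, "October": 30, "November": 42, "December": 38}

    # Step 5: purely factual statements, one per data point, no interpretation.
    factual_statements = [
        f"The data for {month} showed {count} sales."
        for month, count in sales_by_month.items()
    ]

    # Step 6: simple comparisons between adjacent months, referencing values directly.
    months = list(sales_by_month)
    comparisons = []
    for earlier, later in zip(months, months[1:]):
        change = sales_by_month[later] - sales_by_month[earlier]
        direction = "higher" if change > 0 else "lower"
        comparisons.append(
            f"Sales in {later} ({sales_by_month[later]}) were {direction} than in "
            f"{earlier} ({sales_by_month[earlier]}) by {abs(change)}."
        )

    # Step 7: a deeper observation, still stated in terms of the declared variables.
    best_month = max(sales_by_month, key=sales_by_month.get)
    analysis = f"The strongest month was {best_month}, with {sales_by_month[best_month]} sales."

    for line in factual_statements + comparisons + [analysis]:
        print(line)

Note that nothing in this sketch answers “why”; that is the point. Steps 5 through 7 stay inside the data, and the explanations of step 8 are layered on afterward, with any speculation explicitly labeled.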

The final half-step is, of course, crossing the t’s and dotting the i’s for your final draft– and make sure it’s perfect! A missed detail on something not mission-critical will still distract your audience from your data and analysis.

I hope that my readers have a better idea of how to write and perhaps think systematically after reading this piece. I think that many non-technical people struggle with systematic writing because of how data-centric it is; communicating in the style of referencing data and withholding speculation can be quite difficult for people accustomed to relating written concepts intuitively and emotionally.

If you have any questions, leave ’em in the comments and I’ll respond. I know that the 21st century will have the highest demand yet for systematic thinkers and writers, so I’m also considering forming a consultancy to help organizations train their employees and executives to think and communicate in systematic ways. Expect more on topics like this in the future.

As always, follow me on Twitter @cryoshon, re-post my articles to social media, and subscribe to the mailing list on the right!