A Retrospective on Nassim Taleb’s The Black Swan

Nassim Taleb’s seminal exploration of randomness and unknown unknowns remains relevant ten years after its debut. An absolute classic of the food-for-thought genre of nonfiction, The Black Swan dips into philosophy, economics, finance, epistemology, and empiricism. Writing in an intellectual style with frequent references to other thinkers and artists, Taleb regales the reader with his sharp wit and excerpts from his experiences on Wall Street. If you fancy yourself a thinker who likes to be challenged by counterintuitive ideas, The Black Swan is the right book to pick up.

Within The Black Swan, Taleb’s primary task is to explain what he calls “black swans,” which can be described briefly as unforeseen events that cause extreme consequences in their native context. Taleb explains how the Great Financial Crisis resulted from a confluence of black swan events and lays the groundwork for his strategy to mitigate their effects.

In the course of discussing how to mitigate the damage caused by financial and economic black swans, Taleb teaches the reader about a variety of logical fallacies. Of particular interest is the narrative fallacy, which Taleb claims is a natural human mechanism for creating links between data points and extrapolating trend lines incorrectly. The core message of The Black Swan can be summarized as “past behavior does not predict future events.”

Much of Taleb’s writing demands that the reader engage in critical thinking. While an excellent book, The Black Swan is not a technical paper, and economists, scientists, or finance professionals who seek a mathematical investigation into Taleb’s ideas will be disappointed. Thankfully, Taleb’s academic bibliography is quite extensive, so curious readers can follow up with the empirical evidence that informs his views if they so desire.

Perhaps the most impactful nonfiction book of the 2000s, The Black Swan is a critical read for those seeking to extend their library of thoughtware. By introducing unfamiliar cognitive and financial ideas and explaining them fully, Taleb leaves the reader with a new perspective on events. If just a taste of Taleb’s philosophy isn’t enough, readers can follow up with the subsequent books in his Incerto series to learn more.


How To Read A Book is a Must-Read

“How To Read A Book” sounds like a book intended for elementary school students, but that couldn’t be farther from the truth. Mortimer Adler’s 1940 nonfiction lesson on how to read effectively should be read by everyone before they pick up another book. Using beautifully concise yet characteristically 1940s-era language, Adler’s book is a primer on how to absorb written information. If you’re interested in growing smarter by reading a book, look no further.

As Adler says early on, “Books are the way that we learn from absent teachers.” In How to Read a Book, the reader quickly understands that the teacher is Adler, and that the book itself is more of a class than a simple library of facts to plow through. Adler’s professorial tone guides the reader through an exploration of what he calls the “levels of reading”: basic semantic understanding, basic interpretation, and finally, critical thinking. During his discussion of each level, Adler explains how readers can identify opportunities to improve their skills at that level.

By reading How to Read a Book, the reader will pick up good reading habits and boost their critical thinking. Asking questions of your reading material like “what is the author trying to accomplish with this piece?” and “does the author succeed in what they were trying to do?” becomes second nature. As a bonus, Adler closes the book with a bibliography of challenging books for improving each level of reading. Though all of the books mentioned predate 1940, the breadth and sophistication of Adler’s reading list is quite impressive, and it contains books that will challenge readers no matter how competent they are.

You may want to go back to your collection and re-read some of your favorites after you’re equipped with new reading skills. With How to Read a Book behind you, it’s likely that you’ll find new perspectives on your favorites, as well as points of improvement. Once you’ve learned how to read a book, your skill will only increase with time.

How to Be A Good or Bad Interviewer

The job interview process has been written about extensively, and some people even receive specialized training from their employers on how to properly conduct an interview. Everyone has ideas about the best way to conduct interviews, and large companies tend to have specific motifs or methods of interviewing which can take on an undeservedly mythical reputation. There’s a lot of debate over whether the interview is an effective way of selecting talent, but one thing is settled: if you are looking for a job, you will have to have an interview. If you are looking to fill a job, you will have to interview someone. This article is an analysis of common mistakes that I’ve seen interviewers make. I can’t help but propose a few better ways of conducting interviews alongside my analysis.

The best parts of job interviewing are finding out interesting things about companies while meeting the potentially cool people who populate them. The worst parts are finding out exactly how bad people can be at conducting an interview. I am not an expert on interviewing people by any means, nor am I an expert at being interviewed; I’ve made quite a few awful mistakes in both camps, to be sure. I think that I have a few good ideas on what not to do as an interviewer, though. The anecdotes that I’ll provide here are not embellished. I will note that this experience comes mostly from interviewing for scientific jobs, and it may be that the personality of scientists precludes them from being good interviewers, but I don’t believe that this is the case.

It behooves everyone to be good at acing job interviews because jobs are desirable, but few people are so inclined to be perfectionistic about the other side of the table. Companies want good talent, but of course they can always extend an offer to someone they like and have at least a chance of acceptance, even if they interviewed the candidate poorly and the candidate performed poorly.

Because of this inherent inequality, the process of interviewing candidates is typically far weaker than it should be in a few different dimensions. To clarify: attending the job interview and presenting a good face to potential employers is always a high priority for the job seeker, but preparing for and properly conducting the interview is very rarely a high priority for the interviewers. This mainstream habit of interviewing carelessness shows like a deep facial scar. The consequence of low-priority interviewer preparation is sloppiness in execution and wasted time for all parties.

First, in every interview that I have ever been on either side of, there has been at least one person who has not read the resume or given the candidate any premeditation. Do not be this person, because this person has little to contribute to the investigation into whether the candidate is suitable. Pre-reading the candidate’s resume is a must if the aim of the interview is to determine whether the person is qualified technically and socially. The purpose of the job interview is not to spend time checking whether the candidate can recapitulate their resume without forgetting their own accomplishments, but rather to assess whether the candidate will improve a team’s capability to execute work. This fact seems self-evident, yet I have been interviewed by several unrelated people who explicitly stated that they would check whether what I was saying matched what was on my resume.

Aside from pre-reading the candidate’s resume, interviewers should also pre-think about the candidate. Practically none of the interviewers I have interacted with have given the candidate any meaningful pre-thought. Writing a job description or giving the candidate’s resume a once-over does not count as pre-thinking. If you want to find the perfect person for a position, it is a disservice to your company not to prioritize premeditation about the candidate. Without premeditation, there can be no intelligent questioning of the interviewee. Is the person’s previous experience going to give them unique insights into the job they are hoping to fill? Is this candidate going to be socially successful in this position? Set time aside to write down these questions when there is nothing else competing for your attention.

Frank consideration of whether the person will fit in with the others on the team should be broached ruthlessly at this early step. Social conformity is a strong force, and an inability to fit in can cause disruption among less flexible teams. To be clear, I think that heterogeneous teams have many advantages, but I also think that most interviewers are largely engaged in an exercise of finding the roughly qualified candidate who conforms most readily to the already-established majority. Biases about what kind of person the candidate is are going to warp the judgment of the interviewer no matter what, so it’s better to air them out explicitly such that they may be compensated for or investigated further when the candidate comes in. The objective here is not to find things to dislike about the candidate, but rather to identify where the biases of the interviewer may interfere with collecting good data when the candidate arrives.

Remember that this critical step is rarely as simple as it seems. What kind of positive job-related things does the interviewer think about themselves? These positive self-thoughts will, unfortunately, be used as a hidden rubric to assess the candidate. The interviewer identifying with the candidate is one of the strongest guarantors of a job offer. The other takeaway here is that once the candidate comes in for the interview, be sure to explicitly note points of personal and professional identification between the interviewer and the candidate! Identifying with the candidate is great for the candidate’s prospects of getting the job, but it may not be the correct choice for a team that will have to accommodate a new person who isn’t qualified.

Consider doubts about the candidate based on the information available, then write down questions which will help to address those doubts. Being tactful and canny at this step is an absolute must, so if there’s any doubt about being able to execute such questioning gracefully, defer to someone else who is more skilled. Is the candidate too young or too old to fit in with the team, or are there concerns about the candidate’s maturity? Is the candidate visibly part of any grouping of people which isn’t the majority? Is the candidate going to rock the boat when stability is desired? It’s better to clarify why the candidate may not be socially qualified than to hem and haw without explicit criteria.

Winging it simply will not produce the best possible results here, because really the interviewer is interviewing their own thoughts on a candidate who is still unseen. Honesty regarding the team’s tolerance for difference is critical. To be clear, I do not think that the heavily conformity-based social vetting of candidates is good or desirable whatsoever. In fact, I think the subconscious drive toward a similar person rather than a different one is a detrimental human habit that results in fragile and boring social monocultures. I am merely trying to describe the process by which candidates are evaluated in reality, whether the interviewers realize it or not. The social qualification of the candidate is probably the largest single factor in deciding whether the candidate gets the job, so it’s important to pay attention rather than let it go unspoken. Interviewing a candidate is a small but complete project that lives within the larger project of finding the right person for the open position.

We’ve reached the end of the things to do during the period before the candidate arrives. But what about once the candidate is sitting in the interview room? In situations where there are multiple interviewers, successive interviewers nearly always duplicate the efforts of previous ones. They ask the same questions, get the same answers, and perhaps have a couple of different follow-ups, but largely they waste everyone’s time by treading and re-treading the same ground.

Have a chat with the team before interviewing the candidate and discuss who is going to ask what. The questions should be specific to the candidate and should result from the individual premeditation that the members of the interviewing team performed before the meeting. The same concerns may crop up for different candidates, which is fine. Examine popular trends of concern, and figure out how to inquire about them. Assign the most difficult or probing questions to the most socially skilled teammate. If there’s no clear winner in terms of social skill, reconsider whether the question can be asked gracefully at all.

Plan to be on time, because the candidate did their best to be on time. In my experience, interviewers are habitually late, sometimes by as much as thirty minutes. This problem results from not prioritizing interviewing as a task, wastes everyone’s time, and is entirely avoidable. Additionally, make sure that your interviewing time is uninterrupted. An interviewer who is distracted by answering phone calls or emails is not an interviewer who is reaping as much information as possible from the candidate. If there is something more pressing than interviewing the candidate during the time which was set aside by everyone to interview them, reschedule. Interviewing is an effort- and attention-intensive task, and can’t simply be “fit in” or “made to work” if there are other things going on at the same time.

The interviewers should have the candidate’s resume in hand, along with a list of questions. When possible, the questions should be woven into a conversational framework rather than delivered in an interrogation-style format. Conversational questioning draws the candidate slightly out of interview mode, though it isn’t possible or even desirable to jolt the candidate into a fully informal mode given the stress involved in being interviewed. Remember that the goal is to ask the candidate the questions that will help you determine whether they are socially and technically qualified for the job. The facade of the candidate doesn’t matter, provided that you can assess those qualifications.

Don’t waste everyone’s time with procedural, legal, or “necessary” but informationally unfruitful questions! Leave the routine material to HR and instead prioritize getting answers to the questions that are specific to evaluating this candidate in particular. HR isn’t going to have to live with having this person on their team, but they will likely be concerned about logistics, so let them do their job and you can do yours more efficiently. If there’s no HR to speak of, a phone screen before the interview is the time for any banalities. To reiterate: focus on the substantial questions during the interview, and ensure that procedure and paperwork don’t eat up valuable time when the candidate is actually in front of you.

If there are doubts about a candidate’s technical abilities or experience, have a quick way of testing in hand, and be sure to notify the candidate beforehand that they will be tested. Once again, do not wing it. Remember that the candidate’s resume got them to the interview, so there’s no point in rehashing its contents unless a specific question prompts the candidate to do something other than summarize what they’ve already written down for you. I highly suggest designing questions to shed light on the things which are not detailed in the resume or cover letter. The candidate’s thought process and demeanor are the two most important of these items.

Assessing the experience or thought process of the candidate can frequently be done by posing a simple “if X, then what is your choice for Y?” style question. In this vein, consider that personal questions aren’t relevant except to assess the social qualifications of the candidate. Therefore, questions regarding the way that the candidate deals with coworkers are fair game. I highly suggest making questions as realistic as possible rather than abstract; abstract questions tend to have abstract answers that may not provide actionable information, whereas real creativity involves manipulating the particulars of a situation.

Aside from asking fruitful questions, the interviewer should take care with the statements which they direct toward the candidate. I will take this opportunity to explain a common and especially frustrating mistake that I have seen interviewers make. As is self-evident, the interview is not the time to question whether the candidate was suitable to bring in for an interview. To discuss this matter with the candidate during the interview is a misstep, and the time could be better spent trying to understand the candidate’s place in the team.

To this end, it is counterproductive and unprofessional to tell candidates during the interview that they are not technically or socially qualified for the position they are interviewing for! The same goes for interviewer statements which explicitly or implicitly dismiss the value of the candidate. Interviews are rife with this sort of unstrategic and unfocused foul play. This has happened to me a number of times, and I have witnessed it as a co-interviewer several times as well.

A red flag for a terrible interviewer is that they tell the candidate they are unqualified, or try to make the candidate admit a lack of qualifications or experience. Mid-level managers seem to be the most susceptible to making this mistake, and mid-career employees the least. It is entirely possible to find the limit of a candidate’s knowledge in a way that does not involve explicitly putting them down. Voice these concerns to other interviewers before the candidate is invited in. If your company considers minimization of the candidate’s accomplishments a standard posturing tactic designed to produce lower salary requests, consider leaving.

Aside from being demeaning, the tactic of putting down the candidate during the interview is frequently used by insecure interviewers who aren’t fit to be evaluating candidates. There is no greater purpose served by intentionally signaling to the candidate that they are not valuable and are unwanted! Time spent lording over how ill-suited the candidate is for the position is wasted time that could be better spent elsewhere.

Don’t play mind games with the candidate; doing so is immature, misguided, and ineffective. Such efforts are nearly always transparent, and they constitute an incompetent approach to interviewing based on the false premise that candidates misrepresent their ability to do work, so the interviewer must throw the candidate off their guard in order to ascertain the truth. This line of thinking dictates that the “true” personality or disposition of the candidate is the target of information gathering during the interview. The habits and realized output of a person while they are in the mode of working are the real target of inquiry in an interview, so don’t get distracted by other phenomena which require digging but don’t offer a concrete return.

Typically, the purpose of these mind games is to get beyond the candidate’s presentable facade in an attempt to evaluate their “true” disposition or personality. This goal is misguided because the goal of an employee is not to have a “true” disposition that is in accordance with what their employer wants, but rather to have an artificial disposition that is in accordance with what their employer wants. We call this artificial disposition “professionalism”, but really it is another term for workplace conformity. I will note that professionalism is a trait that is frequently (but not always) desirable because it implies smooth functioning of an employee within the workplace. The mask of professionalism is a useful one, and all workers more or less understand how to wear it. A worker’s “true” or hidden personality is unrelated to their ability to cooperate with a team and perform work, if that deeper personality even exists at all. Conformity keeps the unshown personality obedient and unseen in the workplace, so it isn’t worth trying to investigate anyway.

After the candidate has left, it’s time for a debrief with the team. Did the candidate seem like they’d be able to fit in with the team socially? If not, could the team grow together with the candidate? Did the candidate pass the relevant technical questions? Is the candidate going to outshine anyone on the team and cause jealousy? Did anyone have any fresh concerns about the candidate, or were any old concerns left unresolved despite efforts to resolve them? It’s important to get everyone’s perspective on these questions. Report back on the answers to the questions that were agreed upon beforehand. If everyone did their part, there shouldn’t be much duplicated effort, but there should be a lot of new information to process.

Not all perspectives are equal, and not all interviewers are socially adept enough to pick up subtle cues from the candidate. Conversely, some interviewers will ignore even strong social cues indicating a good fit if their biases interfere. Interviewers have to remember that their compatriots likely had different experiences with the candidate; if they didn’t, effort was wasted and work was duplicated.

Is the candidate worth calling in for another interview, or perhaps worth a job offer right away? What kind of social posturing did the candidate seem to be doing during each interaction? What was their body language like when they were answering the most critical inquiries? Pay particular attention to the differences in the way that the candidate acted around different interviewers. This will suggest to the interviewers where some of the candidate’s habits lie, and allow analysis of whether those habits will conform with the group’s.

If the interviewing process is really a priority, the interviewers will write down their answers to the above questions and compare them. How you process the results of this comparison is up to you, but if you skip the process, you’re not getting the most information out of interviewing that you could. If you take one concept away from this piece, it should be that teams have to make their interviewing efforts a priority in order to avoid duplicating questions, avoid wasting time on posturing, and properly assess the social and technical qualifications of the candidate.

If you liked this piece, follow me on Twitter @cryoshon and check out my Patreon page! I’ve been sick the past week (as well as involved in an exciting new opportunity) so I haven’t been writing as much, but I should be over my cold by Monday and back to regular output.


Why the Sharing Economy is Awful

Continuing with my thinking on late capitalism has brought me to consider the idea of the “sharing economy”. Many people seem to intuitively understand the gist of the sharing economy: people use information technology to facilitate other people’s renting of their stuff. Immediately, there is something strange: “sharing” does not mean “renting” in any context except the term “sharing economy”. The sharing economy is the renting economy; no ownership is actually shared, nor is any use actually “shared”, except in exchange for money.

If anything, the sharing economy refers to the mass choice of struggling workers to rent out the combination of their labor time and their expensive stuff to information technology companies. The “sharing” with the end users is the least relevant part of the story, because the end users are actually just consumers finding their preferred product. Consumers are not participants in the particular economic theme of “sharing”, as they share nothing whatsoever, and instead buy the product as they desire it.

Typically, the sharing economy doesn’t provide a totally novel product to consumers, but rather a more convenient version of a product than the traditional competition offers. The consumers for the product being “shared” existed before the sharing economy came along, so the demand was already there too. The consumers are finding the most efficient path for their money to turn into the product they want, a path that information technology companies have provided by creating an app which allows for mass utilization of capital that they do not own, using workers they do not hire.

In a time of weak economic demand, the incentive to generate revenue is as high as ever. There is strong pressure to keep costs down (precluding large capital purchases or development of brand-new products) and to cut unprofitable programs in order to keep revenue as strong as possible despite weaker sales. This poses a problem: how can revenue be generated reliably when demand is weak? To answer this question, we have to step back and examine how revenue is made under normal circumstances.

Revenue is produced by workers utilizing capital to provide something of value. Capital may be thought of abstractly as large quantities of money that can be transformed into physical objects which are used to produce more money, or it can be thought of as the objects that produce money themselves. Traditionally, capital might be a piece of factory equipment, and the owners of capital are the business owners. Capital may depreciate in value as it is utilized to produce revenue. Eventually, the capital may need to be revitalized or replaced.

In the traditional model, normal workers don’t own the capital that they utilize to produce revenue. The worker is paid a fraction of the revenue of the company; most of the revenue of any given company is used to maintain its capital and its workforce. It is the responsibility of the owner of the capital to provide wages to the worker who utilizes said capital to produce revenue. What remains after maintenance of capital and wages is called profit. The profit may be used to purchase more capital, put in the bank, or paid out to workers or owners. The key takeaway here is that workers traditionally do not have any financial responsibility toward the capital which they utilize. The role of the worker is to utilize the capital in order to collect wages.
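To make the arithmetic concrete, here is a minimal sketch of the traditional split in Python. Every figure is invented for illustration; real balance sheets are messier, but the shape of the equation is the point.

```python
# Traditional model: the company owns the capital and bears its costs.
# All figures are invented for the sake of illustration.

revenue = 100_000             # produced by workers utilizing company capital
wages = 40_000                # the workers' fraction of the revenue
capital_maintenance = 35_000  # borne entirely by the capital's owner

profit = revenue - wages - capital_maintenance
print(f"Owner profit: {profit}")  # 25000; the workers risk none of their own capital
```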

The difference between companies renting capital in the sharing economy and traditional companies producing the same good is critical. The traditional competition is likely to be burdened by upkeep costs in ways that its sharing economy counterparts are not; after all, traditional companies have to own and maintain the capital themselves in addition to retaining workers. Sharing economy companies typically find ways to use contractors instead of full-time workers, reducing their operating costs by providing fewer benefits. The utilization of worker capital to produce revenue is quite an interesting development when paired with the rise of “contractor” style employment arrangements.

The most visible pillars of the sharing economy are AirBnB and Uber. I am not trying to suggest that these companies are “bad” for the economy. I use both of these services, and enjoy the products that they offer. I am suggesting that the sharing economy is detrimental to workers, who are effectively forced to pony up their own capital before being allowed to participate in what amount to low-wage, unskilled-labor-style jobs. What isn’t commonly understood is that the sharing economy is economically exploitative because it lets companies generate revenue from workers’ personal capital.

The sharing economy turns the traditional capital-and-revenue equation on its head. Instead of a company owning capital and employing workers to gain revenue from it, the company merely rents capital owned by the worker as part of the worker’s wages, offloading the up-front cost of capital and discharging the costs of capital maintenance to the worker. Revenues no longer flow toward the owner of the capital, but rather to the renter of the capital. After that, things function normally: workers are paid a static fraction of the revenue, which remains low despite their bringing capital to the table.

The effect of the sharing economy is a part-time injection of previously untapped capital into the economic ecosystem. Common items which most people have (a spare room or car, for instance) can now be used as revenue-producing capital by their owners, who are likely strapped for revenue due to poor economic conditions. Thus the sharing economy allows workers short on revenue to rent out their capital alongside their labor, giving them labor opportunities that they wouldn’t otherwise have, which is a very strong economic incentive. Instead of requiring capital sunk on credentials or time spent beefing up a resume, workers in the sharing economy are merely required to lay a chunk of their capital on the table in order to start working. In some ways, this is good, as it allows people to work who would otherwise not be competitive enough to get a job.

This reversal of the normal order certainly has other benefits: the freedom afforded to those who choose to work as Uber or Lyft drivers is much greater than that of the median worker, who must adhere to standardized hours and habits. The same could be said for the person who puts their spare room up on AirBnB. The income afforded to the workers of the sharing economy certainly keeps many people afloat, but broadly speaking, the sharing economy is an unequal economy because neither risk nor profits are shared.

Workers accept high risk to their capital from constant heavy utilization, and are not rewarded for it. Capital depreciation is likely, and is not compensated for by wages. Total losses of capital are not compensated for whatsoever. Instead, workers put a lot on the line in exchange for average wages whose rate does not increase despite large profits. Should the worker lose their capital, they are out in the cold.
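Running the same toy arithmetic from the worker’s side makes the asymmetry plain. Again, every figure here, including the commission rate, is a made-up number for illustration rather than any real company’s terms.

```python
# Sharing economy model: the worker supplies the capital and bears its costs.
# All figures, including the commission rate, are invented for illustration.

fares_generated = 4_000   # monthly revenue produced with the worker's own car
platform_cut = 0.25       # hypothetical commission kept by the platform
gross_pay = fares_generated * (1 - platform_cut)

fuel_and_repairs = 400    # maintenance, now paid by the worker
depreciation = 500        # wear on the worker's capital, compensated by no one

take_home = gross_pay - fuel_and_repairs - depreciation
print(f"Nominal pay: {gross_pay:.0f}, pay net of capital costs: {take_home:.0f}")
```

The wage looks ordinary until the capital costs come out of it, which is exactly the inequality described above.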

Before the sharing economy existed, the capital of the lower classes was unreachable and reserved solely for personal use; in this sense, the sharing economy is a huge economic leap forward, as it increases the ability of wealth to flow, which, broadly speaking, generates opportunity. Unfortunately, within the paradigm of the sharing economy, wealth largely flows upward rather than circulating. It is unlikely that a worker participating in the sharing economy will make enough money to afford another capital purchase should their revenue-producing capital be destroyed by the process.

There is a case to be made that the sharing economy is a system for transferring wealth from the lower economic classes to the owning class. The capital of the lower classes is first used as a certificate signaling employment-worthiness, then used to generate revenue for those who can afford to rent it en masse to create products for consumers. The profits made are not returned to those who own the capital, but rather to those who own the information technology company which rents it. The owners of the capital are, in this situation, bled at every step of the process and subject to a high amount of instability.

What’s a consumer to do? To start, do research and find out which sharing economy product provider is the most ethical. Paying workers better wages for ponying up their own capital is more ethical than the alternative. Finding out which companies bring on workers as actual employees rather than contractors is also a good idea. Profit-sharing for workers and accommodations for worker capital loss and depreciation are as yet unheard of, and so should be considered icing on the cake.

If you liked this article, check me out on Twitter @cryoshon and also hit up my Patreon page!


How to be a Good Adviser by Playing Pretend

When I left a job about a year ago, one of my friends and coworkers asked me at my going-away party whether I had any advice to pass on to the team. At the time, I said that my advice was not to give generalized advice without a specific issue in mind, because it wouldn’t contain actionable information that would improve the receiver’s experience. With the benefit of time, I can see that there are a few more wrinkles to discuss regarding advising.

Most of my early experience with advising came from my school and university years. Later, I’d go on to advise my friends on their business ventures by asking questions and then following up with more questions. I’ll disclose a caveat to my thinking on advising: I’ve never been especially keen on asking for advice because of all the bad advice I’ve received over the years. My negative advising experiences have given me a lot of ideas to chew on, though.

I’d like to revisit that going-away sentiment and delve a little deeper. Before I do, a brief discussion of what advice is and what advisers are is in order. There is a distinction between offering a piece of advice and being an actual adviser; I’ll touch on both in this piece, with an emphasis on the latter.

Generally speaking, people are familiar with the concept of taking advice from others regarding areas outside their expertise. Additionally, people are usually comfortable with the idea of providing advice to others when prompted, and, frequently to the frustration of others, when they are not prompted. Advice is the transfer of topical information or data by a third party to a person looking for a good outcome. A large volume of our communications consists of offering, requesting, or clarifying advice.

The concept of advice as information will be familiar to almost everyone. Frequently, the topical information that is elicited by a request for advice is anecdotal. If the adviser is careless or not directed, the anecdotal information offered may be merely tangentially related, or actually unrelated, to the issue at hand. Not everyone pays close attention to their outgoing advice if they have no skin in the game. The main problem with anecdotal evidence is that it refers to specific instances of a trend rather than the rules which govern that trend. Yet most advice is anecdotal, perhaps as an artifact of humanity’s sensitivity to personal stories rather than hard data or universal laws.

Informally, it’s nearly impossible to escape anecdotal evidence when requesting or giving advice. Frequently, an adviser will forgo telling the actual anecdote and skip right to the advice that they have distilled from their own experience, leaving the advised with an even more incomplete view. This has predictable consequences when paired with people’s tendency to do as others tell them: decisions get made based on an incomplete group of anecdotes culled from the experience of others, processed from an uncomfortable position of ignorance, and grounded in the emotions of others rather than clear-headed analysis.

I am sure nearly everyone has received completely heartfelt yet completely detrimental advice in their time. If we are lucky, we avoid the consequences of receiving bad advice and catch the mistakes of our advisers in time to reject their thoughts and prevent internalization. If we are unlucky, we follow the path to nowhere and are upset with the results.

Part of maturity is understanding that while others are capable of delivering bad advice, we too are likely to give bad advice if given the chance. We don’t have to commit to delivering advice if we don’t feel qualified, nor do we have to ask for advice or follow advice once given. Advice is just a perspective on an issue, and not all perspectives are equal.

Critically, good advice is specific and actionable rather than vague. If the best that an adviser can do is offer a general direction to follow up on, you’re outside the realm of their experience or outside the amount of effort they’re willing to invest in you. A typical red flag for bad advice is that it’s delivered quickly, sleepily, or nearly automatically.

Good advising is extremely effort-intensive! Rid yourself of advisers who don’t respect you enough to apply themselves fully. In my experience, the prototypical awful adviser is coerced into the role rather than choosing it; university advisers are the worst example of this. Identify which advisers are around only because they’re required to be, and then avoid them and their bad advice.

So, how are we going to limit our ability to deliver bad advice and maximize our delivery of good advice? Should we simply stonewall all requests for advice and refuse to ask others for help? I don’t think that this is the answer, because advice is one of the principal ways in which we can share the experiences of others and make use of experiences that we have not had ourselves. Sharing experiences is a critical component of being human, and it’s unlikely that we could stop even if we tried.

The way that I propose to avoid delivering bad advice, and to actually deliver good advice, is to use a mind-trick on ourselves: playing pretend. First, I’ll need to build a mental image of the thing I want to pretend to be, the best possible adviser; then, when it’s time to give advice, I can pretend to embody that image and put myself in the correct mindset for delivering good advice. After I’ve built the barebones of this mental image, taking it out for a test run with a hypothetical request for advice will help to fill in the details and also provide a template for how to think when it’s time to deliver real advice.

What are the properties of this mental image of the ideal adviser? I think that the perfect adviser is a professorial figure, so adopting an academic tone and a patient, receptive train of thought is necessary. Advising someone shouldn’t be careless or haphazard, so the perfect adviser should mentally state an intention to give the pupil their undivided attention for the duration of the session. The aim is to achieve a meditative focus on the present, where the power of the adviser’s knowledge and experience can act without interference. The adviser is never emotional. Value judgments are deferred or unstated; the details and the pupil are at the forefront.

In order to advise properly, this professorial type will know the limits of his knowledge as well as his strong points, and will weight his statements to the pupil in accordance with how much he really knows, making sure to be precise with his language and to qualify his statements. Reaching the limits of the adviser’s knowledge isn’t something to be ashamed of, as it’s an interesting challenge for the ideal adviser to chew on.

The aim of the perfect adviser is to consider the particular details of the pupil’s situation, relate them to the universal trends which the adviser has uncovered with conscious effort, and then use a combination of those universal trends and the pupil’s particulars to offer a prescription for action. The mental image of the adviser explicitly recites the universal trends to himself as he ponders the direction to indicate to his pupil. The conversation between the pupil and the adviser is marked by long pauses as the adviser takes the time to call critical trends and details into his working memory so that the pupil may make use of them. Advising is a conversation that can’t be rushed, because the adviser might forget to make an important connection or fail to communicate in a precise way. The best advising has no time limit.

With each stanza of conversation, the adviser will find that his idea of the prescription in progress is stalled by some facet of the pupil’s situation which hasn’t been discussed. The adviser asks deeply focused questions to unblock the progress of his advice draft. The draft may have to be completely reworked in light of information gathered from the pupil. Once the draft is completed, the adviser asks validating questions to see whether it is workable and realistic. Upon validation, the adviser delivers the draft in a reassuring yet detached fashion.

I actually use this mental image when I’m called on to give advice, and I think it helps a lot. “Playing pretend” is just a convenient way of stepping into a foreign mindset without getting too self conscious. The important takeaway here is that the mindset of being a good adviser is very different from our normal range of thought because it is both clinical and creative. Clinical in the sense that facts and particulars are recognizable within a general framework, and creative in the sense that the solution to the clinically described problem probably doesn’t have a pre-established treatment.

Advising is a skill that can be learned and perfected, though it’s seldom prioritized. I think that prioritizing becoming a good adviser is absolutely essential if you think that giving advice is a core part of what you do. For the most part, “first do no harm” is a maxim that I wish more advisers practiced. If you liked this article, follow me on Twitter @cryoshon and check out my Patreon page! I’ll probably revisit this article when I have a bit more experience advising.


How to Ask A Good Scientific Question

One of the first tasks a scientist or curious person must undertake before experimentation is the formulation and positing of a scientific question. A scientific question is an extremely narrow question about reality which can be answered directly and specifically by data. Scientists pose scientific questions about obscure aspects of reality with the intent of discovering the answer via experimentation. After experimentation, the results are compared with the scientist’s most current explanation of reality, which is then adjusted if necessary. In the laboratory, the original scientific question will likely require many complicated experiments and deep attention before it is answered.

For everyone else, the scientific question and its experimental response are much more rudimentary: if you have ever wondered what the weather was like and then stepped outside to see for yourself, you have asked a very simple and broad scientific question and followed up with an equally simple experiment. Experiments render data, which is used to adjust the hypothesis, the working model that explains reality: upon stepping outside, you may realize that it is cold, which supports your notion that it is currently winter.

Of course, a truly scientific hypothesis will seek to explain the ultimate cause as well as the proximate cause, but we’ll get into what that means later. For now, let’s investigate the concept of the hypothesis a little bit more so that we can understand the role of the scientific question a bit better.

Informally, we all carry countless hypotheses around in our heads, though we don’t call them that, and we almost never consider them as models of reality informed by experimentation because of how natural the scientific process is to us. The hypotheses we are most familiar with are not even mentioned explicitly, though we rely on them deeply; our internal model of the world states that if we drop something, it will fall.

This simple hypothesis was likely formed early on in childhood, and was found to be correct over the course of many impromptu experiments where items were dropped and then were observed to fall. When our hypotheses are proven wrong by experimentation, our response is surprise, followed by a revision of the hypothesis in a way that accounts for the exception. Science at its most abstract is the continual revision of hypotheses after encountering surprising data points.

If we drop a tennis ball onto a hard floor, it will fall– then bounce back upward, gently violating our hypothesis that things will fall when dropped. Broadly speaking, our model of reality is still correct: the tennis ball does indeed fall when dropped, but we failed to account for the ball bouncing back upward, so we have to revise our hypothesis to explain the bounce. Once we have dropped the tennis ball a few more times to ensure that the first time was not a fluke, we may then adjust our hypothesis to include the possibility that some items, such as tennis balls, will bounce back up before falling again.
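For the programmers in the audience, this revise-on-surprise loop can be sketched in a few lines of toy code. The “hypothesis” here is just a set of permitted outcomes, which is a cartoon of real science, but the control flow is the same.

```python
# Toy sketch of hypothesis revision: observe, check the model, revise on surprise.
# The outcomes and the "model" are invented stand-ins for illustration.

expected_outcomes = {"falls and stays down"}  # the childhood hypothesis

observations = [
    "falls and stays down",        # a book
    "falls and stays down",        # a cup
    "bounces back up, then falls", # the tennis ball: a surprising data point
]

for outcome in observations:
    if outcome not in expected_outcomes:
        print(f"Surprise! Revising the hypothesis to permit: {outcome!r}")
        expected_outcomes.add(outcome)  # naive revision: some things bounce
```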

Of course, this hypothesis adjustment regarding tennis balls is quite naive, as it assigns the property of bouncing to certain objects rather than to a generalized phenomenon of object motion and collision. The ultimate objective of the scientific process is to resolve vague hypotheses into perfect models of the world which can account for every possible state of affairs.

Hypotheses are vague and broad when first formed. Violations of the broad statements allow for clarification of the hypothesis and add detail to the model. As experiments continue to fill in the details of the hypothesis, our knowledge of reality deepens. Once our understanding of reality reaches a high enough level, we can propose matured hypotheses that can actually predict the way that reality will behave under certain conditions– this is one of the holy grails of scientific inquiry. Importantly, a prediction about the state of reality is just another type of scientific question. There is a critical caveat which I have not yet discussed, however.

Hypotheses must be testable by experimentation in order to be scientific. We will also say that hypotheses must be falsifiable. If the hypothesis states that the tennis ball bounces because of magic, it is not scientific or scientifically useful because there is no conceivable experiment which will tell us that “magic” is not the cause. We cannot interrogate more detail out of the concept of “magic” because it is immutable and mysterious by default.

Rather than filling in holes in our understanding of why tennis balls bounce, introducing the concept of magic as an explanation merely forces us to restate the original question: “how does a tennis ball bounce?” In other words, introducing the concept of “magic” does not help us add details which explain the phenomenon of tennis balls bouncing, and ends up returning us to a search for more details. In general, hypotheses are better served by introducing new concepts or terminology only when necessary to label the relation of previously established data points to each other; the same goes for coining a new term.

Now that we are on the same page regarding the purpose of scientific questions (adding detail to hypotheses by testing their statements), we can get into the guts of actually posing them. It’s okay if the scientific question is broad at first, so long as increasing levels of understanding allow for more specific inquiry. The best way to practice asking a basic scientific question is to imagine a physical phenomenon that fascinates you, then ask how it works and why. Answering the scientific question “why” is usually accomplished by catching up on previously performed research. Answering “how” will likely involve the same, although it may run into the limit of human knowledge and require new experimentation to answer definitively. I am fascinated by my dog’s penchant for heavy shedding. Why does my dog shed so much hair, and how does she know when to shed?

There are actually a number of scientific questions here, and we must isolate them from each other and identify the most abstract question first. We look for the most abstract question first in order to give our inquiry a conceptual location; once we know the largest headline of our topic, we know where on the paper to squint and resolve the fine print. In actual practice, finding the most abstract question directs us to the proper body of existing research.

Our most abstract question will always start with “why”. Answering “why” requires a comprehensive understanding of the general principles that govern the phenomena in question, whereas “what” or “how” typically refers to an understanding limited to fewer instances. So, our most abstract question here is, “Why does my dog shed so much?”

A complete scientific explanation of why the dog sheds will include a subsection which describes how the dog knows when to shed. Generally speaking, asking “why” brings you to the larger and more comprehensively established hypothesis, whereas asking “how” brings you to the narrower, less detailed, and more mechanistic hypothesis. Answering new questions of “why” in a scientific fashion will require answering many questions of “how” and synthesizing the results. When our previously held understanding of why is completely upended by some new explanation of how, we call it a scientific revolution.

At this point in human history, for every question we can have about the physical world, there is already a general hypothesis which our scientific questions will fall under. This is why it is important to orient our more specific scientific questions of “how” properly; we don’t want to be looking for our answer in the wrong place. In this case, we can say that dogs shed in order to regulate their temperature.

Temperature regulation is an already established general hypothesis which falls under the even more general hypothesis of homeostasis. So, when we ask how the dog knows when to shed, we understand that whatever the mechanistic details may be, the sum of those details will result in homeostasis of the dog via regulated temperature.

Understanding the integration between scientific whys and hows is a core concept in asking a good scientific question. Now that we have clarified the general “why” by catching up with previously established research, let’s think about our question of “how” for a moment. What level of detail are we looking for? Do we want to know about the hair shedding of dogs at the molecular level, the population level, or something in between? Once we decide, we should clarify our question accordingly to ensure that we conduct the proper experiment or look for the proper information.

When we clarify our scientific question, we need to phrase it in a way such that the information we are asking for is specific. A good way of doing this is simply rephrasing the question to ask for detailed information. Instead of asking, “how does the dog know when to shed?”, ask, “what is the mechanism that causes dogs to shed at some times and not others?”

Asking for the mechanism means that you are asking for a detailed factual account. Indicating that you are interested in the aspect of the mechanism that makes dogs shed at some times but not other times clarifies the exact aspect of the mechanism of shedding that you are interested in. Asking “what is” can be the more precise way of asking “how.”

The question of the mechanism of shedding timing would be resolved even further into even more specific questions of sub-mechanisms if we were in the laboratory. Typically, scientific questions beget more scientific questions as details are uncovered by experiments which attempt to answer the original question.

As it turns out, we know from previous research that dog shedding periods are regulated by day length, which influences melatonin levels, which influences the hair growth cycle. Keen observers will note that there are many unstated scientific questions which filled in the details where I simplified using the word “influences”.

Now that you have an example of how to work through a proper scientific question from hypothesis to request for details, try it out for yourself. Asking a chain of scientific questions and researching the answers is one of the best ways to develop a sense of wonder for the complexity of our universe!

I hope you enjoyed this article. I’ve wanted to get these thoughts onto paper for quite a long time, and I expect I’ll revisit various portions of this piece later on because of how critical the topic is. If you want more content like this, check out my Twitter @cryoshon and my Patreon!

How to Become a Smarty Pants

I’ve seen a small amount of interest in a few communities regarding building status as an “intellectual” in the colloquial sense, and I think it’s more accurate to say that people would rather be perceived as smart than as dumb, which is completely fair.

This article could also be called “How to Look and Sound Like an Intellectual”, although frankly that implies a scope much larger than anything I could discuss. So, we have a lighthearted article which purports to transform regular schlubs into smarty pants, if not genuinely smart people. If you already fashion yourself a smarty pants, read on; I know you’re already into the idea of growing your capacities further. Hopefully my prescription won’t be too harsh for any given person to follow if they desire.

While it seems a bit backward to me to desire a socially assigned label rather than the concrete skills which cause people to assign that label, building a curriculum for being a smarty pants seems like an interesting challenge, so I’ll give it a shot. I hope that this will be a practical guide on how to not only seem smarter, but actually think smarter and maybe even behave smarter. The general idea I’m going to hammer out here is that becoming an intellectual is merely a constant habit of stashing knowledge and cognitive tools. The contents of the stash are subject to compound interest as bridges between concepts are built and strengthened over time.

In many ways, I think that being a smarty pants is related to being a well-rounded person in general. The primary difference between being seen as an intellectual and being seen as a well-rounded person is one of expertise. The expertise of an intellectual is building “intellect”, which is an amorphously defined faculty that lends itself to making witty rejoinders and authoritative-sounding commentary. There’s more to being a smarty pants than puns and convincing rhetoric, though: smarty pants everywhere have been utilizing obscure namedropping since the dawn of society. Playtime is over now, though. How the heck does a person become a smarty pants instead of merely pretending to be one?

Being a smarty pants is a habit of prioritizing acquisition of deep knowledge over superficial knowledge. Were you taught the theory of evolution in school? Recall the image that is most commonly associated with evolution. You probably pictured the monkey gradually becoming a walking man, which is wrong. The superficial knowledge that humans and monkeys had a common ancestor is extremely common, but the deeper knowledge is that taxonomically, evolution behaves like a branched tree rather than a series of points along a line.

See how I just scored some smarty pants points by taking a superficial idea and clarifying it with detailed evidence which is more accurate? That’s a core smarty pants technique, and it’s only possible if you have deep knowledge in the first place. Another smarty pants technique is anticipating misconceptions before they occur, and clearing them up preemptively. How should you acquire deep knowledge, though?

Stop watching “the news”, TV, movies, cat videos, and “shows”. Harsh, I know– but this step is completely necessary until a person has rooted themselves in being a smarty pants. This media is intended to prime you for certain behaviors and thoughts, occupy your time outside of work, and provide a sensation of entertainment rather than enriching your mind. The more you consume these media, the less your mind is your own, and the more your mind is merely a collection of tropes placed there by someone else. Choosing to be a smarty pants is the same as choosing isolation from the noise of the irrelevant.

For the most part, these media are sources of superficial information and never deep information. You can’t be a smarty pants if you’re only loaded with Big Bang Theory quotes, because being a smarty pants means knowing things that other people don’t know and synthesizing concepts together in ways that other people wouldn’t or couldn’t. There is zero mental effort involved in consuming the vast majority of these media, even the purported “educational” shows and documentaries which are largely vapid. Seeing a documentary is only the barest introduction to a topic. Intellectuals read, then think, then repeat.

I guess I’ve said some pretty radical things here, but try going back and viewing some media in the light I’ve cast it in. There are exceptions to the rule here, of course: The Wire, The Deer Hunter, American Beauty, or an exceptionally crafted documentary. The idea is that these deeper works are mentally participatory rather than passively consumed; the depth and emotionality that the best audiovisual media convey can be considered fine art, and smarty pants love fine art. During your smarty pants training, I would still avoid all of the above, though. Speaking of your smarty pants training…

Stop reading “the news”, gossip of any kind, Facebook, Twitter, clickbait articles, and magazines. These things are all motherlodes of superficial information. As Murakami said truthfully, “If you only read the books that everyone else is reading, you can only think what everyone else is thinking.” This concept is absolutely critical because an intellectual is defined by depth of thought, quality of thought, and originality of thought relative to the normal expectation. Loading up on intellectual junk food is useless for this purpose, so get rid of it and you will instantly get smarter.

Notice how I namedropped Murakami there? That’s worth smarty pants points because it’s a conceptual tie-in that is directly relevant to the point I’m trying to make, and it expresses the idea more elegantly than I could on my own. Don’t just namedrop obscure people wildly, as you’ll look more like a jackass than a smarty pants, though the line is blurry at times. Being a fresh-faced smarty pants frequently involves making the people around you feel inadequate, but it shouldn’t when practiced properly!

The purpose of self-enrichment is self-benefit, and it should not be used for putting down others. Frequently, knowledge may be controversial or unwelcome, so begin to be sensitive to that when conversing with others. Life isn’t a contest for who can show off the most factual knowledge– but if it were, a good smarty pants would be in the running to win, and that’s your new goal.

Pick an area that will be your expertise. Pick something you will find interesting and can learn about without laboring against your attention capacity. This should be distinct from a hobby. Which topic you address is up to you, but I’d highly suggest approaching whatever topic you choose in a multi-disciplinary manner. If you’re interested in psychology, be sure to devour some sociology. If you’re interested in biology, grab some chemistry and physics. If you’re a philosopher, try literature or history. Your expertise in your chosen field will mature over time, and eventually you should branch out to gain expertise in a new field.

The idea here is that the process of picking an area of expertise is itself useful to the smarty pants. By evaluating different areas, the smarty pants will get a feel for what they’re interested in, what’s current, and what’s boring. The most intellectually fruitful areas of expertise have a lot of cross-applicability to other areas and concepts, an established corpus of literature, and a lot of superficial everyday-life correlates. Suitable examples of areas of expertise are “the history of science” or “modern political thought”. Unsuitable examples would be “dogs” or “engine design”, which aren’t applicable to outside concepts and don’t confer new paradigms of thought.

Start reading books, in-depth articles, and scholarly summaries on topics which you want to develop your expertise in. A smarty pants has a hungry mind and needs a constant supply of brain food, which is synonymous with deep knowledge. Reading books and developing deep knowledge is never finished for the aspiring smarty pants. Plow through book after book; ensure that the most referenced scholarly works or industrial texts are well-understood. Understand who the major thinkers and groups are within the area of expertise, and be able to explain their thoughts and relationships. Quality is the priority over quantity of information, however.

Merely stopping the flow of bad information in and starting a flow of good information isn’t enough to make a real smarty pants, though it’s a good start. In order to really change ourselves into smarty pants, we must change the way we engage with the world. As referenced before regarding media consumption, a smarty pants must interrogate the world with an active mind rather than a passive mind. What do I mean here?

A passive mind watches the world and receives its thoughts as passed from on high. Passive minds do not chew on incoming information before internalizing it– we recognize this most pungently when a relative makes regrettable political statements culled directly from Fox News. An active mind is constantly questioning validity, making comparisons to previous concepts, and rejecting faulty logic. An active mind cross-references the current topic with its corpus of knowledge, finding inconsistencies.

Creating an active mind is an extremely large task that I’ll probably break out into a full article of its own, but suffice it to say that the smarty pants must get into the habit of chewing on incoming information and assessing its value before swallowing. Learning how to think and write systematically and how to disagree intelligently are both skills that a smarty pants can make use of.

Speaking of relatives, a smarty pants needs to have good company in order to grow. Ditch your dumb old friends and get some folks who are definitely smarter than you– they exist, no matter what you may think of yourself. You don’t really need to ditch your old friends, but you really do need to get the brain juices flowing by social contact with other smarty pants. There are many groups on the internet which purport to be the home of smart people, but my personal choice is HackerNews.

It’ll hurt to feel dumb all the time, but remember that feeling dumb means that you are being exposed to difficult new concepts or information. Feeling dumb is the ideal situation for an aspiring smarty pants because feeling dumb means that you are feeling pressure that will promote growing to meet the demands of your environment. Every time you feel dumb, catch the feeling, resolve the feeling to an explicit insecurity, then gather and process information until that insecurity is squashed by understanding. Like I said before, this step is unpleasant, but nobody said being a smarty pants was easy.

This concludes my primer on how to be a smarty pants. I’ll be writing more on this topic, though a bit more seriously and more specifically. I’d really like to publish a general “how to think critically” article in the near future, and of course critical thinking is a core smarty pants skill. I have a reading list for the most general and abstract “smarty pants education” that I’ll be publishing relatively soon as well. Until then, try practicing the main points here.

Be sure to follow me on Twitter @cryoshon and check out my Patreon page!

How to Survive Late Capitalism As a Worker

My recent response to Paul Graham’s article on economic inequality allowed me to express a few of my relatively mainstream economic ideas to a wider audience. I think that the discussion about inequality is worth fleshing out a bit more, but I’m not so interested in getting into the much-rehashed capitalism versus socialism theoretical debate. Instead, I think a practical article is due: how should people who make their livelihood by selling labor survive and thrive in our current era of soaring inequality and reduced labor power? How should people avoid downward social mobility? The short answer to these questions is that people must learn to identify and properly utilize their personal resources.

Before we continue addressing the title’s question, let’s define some terms that may not be common. First, “late capitalism” is a term which refers to a turbulent phase of the economic system of capitalism. I’m not going to define what capitalism itself is here, but late capitalism is characterized by matured globalization, soaring inequality and attendant opulence/poverty, reduced economic growth, weakened social safety net, mass consumption, and reduced boundaries between political and economic systems. Departing from the definition that Wikipedia offers, I do not believe that there is an overtly Marxist revolution in our near future which would bring a definitive conclusion to the economic system of capitalism. I do believe that workers must advocate for themselves in order to receive compensation for the resources which they expend at work. The term “late capitalism” is still fruitful because it is a convenient way of describing the broad strokes of an unstable period of time.

The confluence of these trends results in increasing poverty and a threat to individual standard of living. The purpose of this article is to shed new light on how an individual can navigate this period of time conscientiously rather than as shark bait– the unfortunate fate of the underclass. The underclass is filled with people who experienced downward social mobility, and who now have trouble surviving.

The next terms to define are “survive” and “worker”. To survive late capitalism means having sufficient personal resources to ensure that physical and emotional needs are met for an individual as well as their dependents. Critically, survival in late capitalism also requires the continued ability to rent out personal resources in exchange for money. For our purposes here, having no money and no ability to get money is equal to death.

Money can be thought of as living inside all resources like an ore within rock. The resources in question are too many to list, but I’ll get into some of them, namely time/energy, physical health, skill set, disposition, fluid cash, and social network. Taken together, the sum of personal resources that an individual can bring to bear can be considered “capital” which a corporation may rent in order to make a profit by utilizing the individual’s resources. A worker is a person who rents out their personal resources as their primary way of gaining money. The most important step to surviving late capitalism is understanding what your personal resources are and ensuring that no single personal or social resource is depleted beyond renewal as a result of renting them out for money.

If a resource is depleted beyond renewal, opportunities to sell your labor which would tax that resource are cut off. Resources depleted beyond renewal typically result in realized downward social mobility or abject poverty. For many people, there appear to be few choices but to continue depleting the few resources they have until they are barren. If resources are on track to be completely depleted with no prospect of renewal, we can call it a death spiral, because it eventually results in economic death.

Why would renting personal resources out for money at a job result in these resources being rendered barren? Given the way that I have defined my terms, there is inherent tension between the concepts of surviving and being a worker. Working is renting out personal resources in exchange for money, and there are no guarantees that these resources are being rented out and expended at the correct rate or for the correct monetary return.

At a naive level, we can say that a worker who rents out their physical health resource too aggressively may end up sick or injured, and thus unable to work until they have recovered. If the worker’s physical health is completely depleted, they may become disabled or dead, precluding them from renting that resource in the future– an economic deathblow resulting from accident or mismanagement. This may seem a bit flip, but it’s a real concern for manual laborers.

Physical health is a personal resource which is finite, but renewable. The same could be said for a worker’s mental energy resource. All workers rent out some of their physical health resources as part of the package that employers demand. Sitting at a desk hunched over a computer all day is detrimental to your health, as we all know– yet it’s part of many jobs.

“Working harder” by expending more physical effort may result in injury, but it’s seldom worth extra money directly. A dilemma occurs when the worker unwittingly or unwillingly expends more of a given resource than they intended given the terms of employment; it is rarely possible to go back and re-negotiate a new fee based on the personal resources expended, though a corporation is sure to do exactly that if it overruns its budget for a contract. So in many situations, workers cannot retroactively correct imbalances in resource use, assuming that the imbalances are noticed at all. I will state that this situation is the progenitor of many injustices, and there is little economic or political pressure to create a remedy.

An additional difficulty occurs when we consider exactly which personal resources are going to be expended for a given job. Every job will deplete a worker’s physical energy/health, mental energy, and time resources. Most jobs will also deplete some of a worker’s money indirectly in the form of transportation. It is very easy to lose track of our individual resources and how much we are taxing them, as we often realize when we look up from our work and see that it is 9:00 PM instead of the informally agreed upon stopping time hours earlier. Thankfully, our time resources are always renewable, though we may have plans to utilize them in certain ways on any given day.

In order to prevent personal resources from being depleted beyond renewal, the worker must understand in depth the total quantity of each resource and the rate at which it is used. Making a rational deal with an employer regarding use of personal resources is impossible without explicit knowledge of what those resources are and how much they will be expended, yet most people have only vague ideas of what is in their stable and what is in their work contract. Furthermore, employers always have concrete knowledge of their company’s financial resources, but never have an itemized list of employees’ personal resource expenditures; this asymmetry favors the employers massively, as it means they cannot be held accountable for breaches of contract resulting from too many personal resources being used. Having this kind of knowledge explicitly stated would benefit employers massively as well, allowing them to understand inefficiencies of individual resource use and provide support as needed to make their workers happier.

Employers hide and thrive in the ambiguity of personal resource use; workers are eaten alive by uncalculated overages. Surviving late capitalism is possible by rectifying this inequality via the surgical application of knowledge directly where it is unwelcome. A vague plea for economic fairness falls on sewn-shut ears, but an itemized invoice for resources disbursed is undeniable. Though corporate culture is not yet receptive to such brazen empiricism, employers will grudgingly adjust if the issue is forced by their employees– and it must be forced vigorously.

It is my assumption that the majority of worker resources, including money (for housing near work, etc.), are expended in large quantities by their work, with the remaining resources being expended at home or “wasted” by disuse. A wasted resource is a resource which isn’t utilized by the individual, whether rented out for money or expended on other things. The most easily wasted resource is time, though physical energy and social resources are also typically not fully utilized. We will set aside the topic of wasting the money resource, as it is a very large jar of snakes that has been discussed many other times.

For most people, expending the majority of their resources on work is a way of life that is accepted as necessary and virtuous. The difficulty with the “virtuous” component of this point of view is that it promotes a peculiar type of rounding-up fallacy where the worker believes that it is just for their employment to consume the majority of their resources, so a little bit more sacrifice in the name of employment is also just.

There is even a pejorative name for this tendency: the Protestant work ethic. The tendency to commit more resources to work than the minimum explicitly agreed upon in the employment contract is a form of wasting resources, as the resources are not expended for personal purposes, nor do they directly result in more money for the individual. The defense against wasting resources as a result of work is to explicitly agree with the employer on the amount of resources that will be expended in the course of work.

As uncomfortable as it may be to force the issue of limits, not agreeing in writing to ironclad boundaries always leads to a worker’s personal resources being wasted. For some jobs, overtime is a form of agreement which offers compensation for resources which would otherwise be considered by the worker to be wasted. For most jobs, there is no such agreement where in fact there should be. This norm is harmful, and must change.

When workers on average commit more resources to work than the explicitly agreed upon amount, employers grow to expect that level of commitment. This is how a society eventually arrives at exploitation when starting from acceptable premises. More perversely, workers grow to expect their own resource usage to run higher than the explicitly agreed upon amount, even including the previous over-commitment, creating a death spiral of sorts. We as a society are currently in the midst of this death spiral, and only by simultaneous individual action can it be stopped.

How frequently do you spend more time at work than is required? How quickly does being at work physically tire you? Mentally? Does it cost a lot to get to and from work? What does that work out to weekly? Are you zombie-like after work, or still perky? Are your personal relationships being impeded by work? Is your skill set being bolstered at work, or is it decaying from disuse or overly narrow use?

Brutal honesty is necessary here. Work is not the only thing which taxes personal resources, though– family, recreation, religion, and friends count too. All activities that an individual performs consume their personal resources to some extent. Luckily, many activities are beneficial and can refill depleted resources.

As an exercise, write it all down in a table which details each resource, your estimated total capacity for it, the current amount remaining, whether the resource is renewable or not, and roughly how much of it you use during the activities required for work, home, or play. Are you being compensated for the totality of usage of these resources, or just a few? Is the current rate of resource usage and renewal going to render this resource barren given enough time? Which resources are being wasted at work? At home? Did you sign up for this, or something else? Aside from economic issues, this is a great way of finding out which activities in your life are beneficial and which aren’t.
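
For the programmers in the audience, here is one way to make the table concrete– a minimal sketch in Python, where every resource name, capacity, renewal rate, and usage figure is a hypothetical placeholder for your own honest estimates:

```python
# A minimal sketch of the personal resource table described above.
# All names and numbers here are invented placeholders.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Resource:
    name: str
    capacity: float                     # estimated total capacity, arbitrary units
    current: float                      # how much is left right now
    renewable: bool                     # can this resource recover over time?
    weekly_renewal: float = 0.0         # how much comes back per week, if renewable
    usage: Dict[str, float] = field(default_factory=dict)  # activity -> units/week

    def weekly_usage(self) -> float:
        return sum(self.usage.values())

    def weeks_until_barren(self) -> Optional[float]:
        """Rough time-to-depletion at the current burn rate; None if sustainable."""
        net_burn = self.weekly_usage() - self.weekly_renewal
        return self.current / net_burn if net_burn > 0 else None

# An invented example ledger:
ledger = [
    Resource("physical energy", capacity=100, current=60, renewable=True,
             weekly_renewal=40, usage={"work": 35, "commute": 10}),
    Resource("fluid cash", capacity=5000, current=1200, renewable=True,
             weekly_renewal=700, usage={"rent": 350, "transport": 60, "food": 120}),
]

for r in ledger:
    fate = r.weeks_until_barren()
    print(r.name, "->", "sustainable" if fate is None else f"barren in ~{fate:.0f} weeks")
```

The point isn’t the code; it’s that tracking a burn rate against a renewal rate, per resource, flags a developing death spiral long before you feel it.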

Identify potential death spirals, and use your resources to stop them as quickly as possible. Economic death spirals can literally kill you if you’re relying heavily on your physical health as a resource. Which resources are being tapped to capacity and are in danger of being burned out? Is there a changeable life situation which you can see is going to lead you to ruin? Why are your resources being drawn on so hard? Would it be possible to trade expenditure of one resource for expenditure of another for a time in order to let a heavily taxed resource recover a bit? For mental energy, people might consider a vacation as a way of vaccinating themselves against burnout by expending money and time. If your skill resources are stagnating from disuse, you can use time, mental energy, and money resources to take a class and stay sharp.

Frequently, social resources have to be called on in order to stop death spirals– don’t be shy, and ask for help well before it’s too late. Family and friends can frequently spare some of their resources in order to give a little slack. Social resources include government and state programs; make use of public resources as much as possible in order to free up your own resources. Aside from using public resources, use friends and coworkers as advocates; if everyone systematically quantifies their resource use and demands compensation and a reduction of wasted resources, there will be change.

Remember: economic gravity means that it’s much easier to fall than to rise. The fewer spare resources an individual has, the more likely they are to slip down the economic ladder, and the less likely they are to rise. If an individual is constantly drawing heavily on all of their resources in order to trade them for money, we can say that the individual is a wage slave and is likely on the cusp of downward social mobility– though they have likely already experienced some in order to arrive at that point.

So, how should a person protect and increase the amount of resources they have, given that having personal resources is so critical to survival? A great boon is to have a job which increases your skill resources and social resources via learning new things and meeting new people. As skill resources and social resources grow, an individual’s value becomes more clear to potential employers, even if they haven’t fully tabulated all of the resources they’ll be using in the job.

Few people would suggest that skill building and networking are not economically useful for an individual. Skill building should be a priority for anyone interested in surviving late capitalism; as employers demand more, you must have more to actually provide. Being in the habit of constantly building skills is being in the habit of constantly providing for your future. This habit will likely tax certain resources heavily until it begins to pay off, so remember not to tap them out completely.

Skill building alone isn’t enough to confer survival, though, as not all skills are economically equal. I would suggest a meta-thought here: an important skill is the ability to differentiate economically lucrative skills from merely economically sustaining skills. This isn’t as obvious as it sounds, and many people jump at what is easy to learn rather than what is profitable to learn. Learning how to operate an espresso machine provides a skill that may offer some financial sustenance, but it is not lucrative. Learning how to perform surgery is lucrative. A measurement of economic demand is frequently a good place to start.

To summarize: a surviving individual’s response to the extreme economic pressure of late capitalism is to increase resource investment in themselves in order to make par, frequently by building financially rewarding skills and social resources. Explicitly knowing what personal resources are and the rate at which they are expended during work is critical, as is a realistic work contract which recognizes the above. In the event of an inaccurate contract or a set of circumstances which taxes a person’s resources too heavily, care must be taken to avoid death spirals.

I hope this article shed some fresh light on my personal strategy for surviving late capitalism. Despite the main points I’ve called out, I worry that I have been a bit longwinded. Unfortunately, I already know that the ideas I put forth here aren’t going to help people who are trapped in the underclass, but maybe they’ll prevent some middle-class people from slipping down there. I do not yet have a real solution for the general problem of “most people don’t have enough personal resources to flourish”. I’m not an authority on this topic by any means, and “the struggle” is far from over. I feel as though I will have a lot more to add on various aspects of this piece, so expect me to revisit it relatively soon.

Follow me on Twitter @cryoshon and be sure to check out my Patreon page!

A Response to Paul Graham’s Article on Income Inequality

While perusing HackerNews today, I encountered this article and this comment thread by Paul Graham (PG for short), founder of Y Combinator. I think that a lengthy response is in order. I originally intended this response to be an HN comment, but it grew too long. If you’re not interested in debating income inequality, this response is not for you. I’ll be quoting quite liberally from PG’s essay in this response.

So, let’s get started. I think PG really missed the mark with his assessment of the impact of economic inequality, substituting for the real-world struggle against economic conditions a rosy economic model which starts from the premise that the rich need the ability to get richer in order to have a successful society.

To quote Graham, mafioso of the startup incubators: “I’m interested in the topic because I am a manufacturer of economic inequality.”

Well, not quite. The throughput of successful startup folks is never going to be enough to make a dent in the economy’s general state of inequality. If anything, YC offers social mobility insurance: the potential for social mobility from the middle classes to the lower-upper class, without the risk of a slip from the middle classes to the lower classes in the event of failure.

“I’ve also written essays encouraging people to increase economic inequality and giving them detailed instructions showing how.”

Perhaps PG misunderstands the terms here? Has he been instructing his charges to pay lower wages and fewer benefits as their profits scale upward so as to add more to their own purses? A disconnect between rising productivity and worker income is one of the largest factors behind economic inequality in the US.

“The most common mistake people make about economic inequality is to treat it as a single phenomenon. The most naive version of which is the one based on the pie fallacy: that the rich get rich by taking money from the poor.”

Well, “taking” is a bit biased, but broadly speaking, it’s true that the poor must buy or rent what the rich are offering in order to survive. This means that the poor are economically at the whim of the rich unless they choose to grow their own food and live pastorally, which isn’t desirable. People pay rent if they’re poor, and collect rent if they’re rich. The poor sell their labor, whereas the rich buy labor in order to utilize their capital, which the poor have none of. These are traits of capitalism rather than anything to get upset about. People get upset when the rich use their oversized political influence to get laws passed to their benefit; over time, the rich make more money due to their ability to manipulate the political system.

“…those at the top are grabbing an increasing fraction of the nation’s income—so much of a larger share that what’s left over for the rest is diminished….”

Check out these charts… the data is much-discussed because it is unimpeachable. Ignoring the reality of data is a mistake economists often make, which can explain some of their more incorrect predictions.

“In the real world you can create wealth as well as taking it from others. A woodworker creates wealth. He makes a chair, and you willingly give him money in return for it. A high-frequency trader does not. He makes a dollar only when someone on the other end of a trade loses a dollar.

If the rich people in a society got that way by taking wealth from the poor, then you have the degenerate case of economic inequality where the cause of poverty is the same as the cause of wealth. But instances of inequality don’t have to be instances of the degenerate case. If one woodworker makes 5 chairs and another makes none, the second woodworker will have less money, but not because anyone took anything from him.”

The woodworker works in a wood shop, not alone. The owner of the wood shop has decided that if 5 chairs are sold, it takes 2 chairs’ worth of money to recoup the costs of making them. With three chairs’ worth of money remaining, he takes two and three-fourths chairs’ worth for himself and distributes the remaining amount to the worker who created the chairs.

The woodworker created the wealth by using the owner’s capital, and so the owner of the capital gets the vast majority of the wealth generated, even though he didn’t actually make the chairs himself. Is the owner “taking” from his employee? No, the employee has merely realized that one fourth of one chair’s income– a quarter chair out of the three chairs’ worth of profit, or roughly 8 percent– is the standard amount that a woodworker can get from working in a shop owned by someone else, and happened to choose this particular shop to work in. “Taking” is the wrong word; “greed” is the proper word. The proportion of revenue derived from capital that is returned to workers selling their labor is far too low. The woodworkers can’t simultaneously pay off their woodworking school loans, pay apartment rent, and care for their children on the wages they’re offered.

“Except in the degenerate case, economic inequality can’t be described by a ratio or even a curve. In the general case it consists of multiple ways people become poor, and multiple ways people become rich. Which means to understand economic inequality in a country, you have to go find individual people who are poor or rich and figure out why.”

Actually, economists have been describing it in terms of ratios and curves for a long time; Piketty’s account is the most current. The “ways” of becoming poor or rich miss the point entirely. Upward social mobility is very low now, and downward social mobility is quite high. Outside of “becoming” rich or poor, the standard of living for the rich has risen while the standard of living for everyone else has dropped. Becoming rich is an edge case which isn’t even worth talking about when there are far more people in danger of becoming poor. We have no obligation to stop someone from “becoming rich”– but we have a strong obligation to stop someone from becoming poor.

“If you want to understand change in economic inequality, you should ask what those people would have done when it was different. This is one way I know the rich aren’t all getting richer simply from some sinister new system for transferring wealth to them from everyone else. When you use the would-have method with startup founders, you find what most would have done back in 1960, when economic inequality was lower, was to join big companies or become professors. Before Mark Zuckerberg started Facebook, his default expectation was that he’d end up working at Microsoft. The reason he and most other startup founders are richer than they would have been in the mid 20th century is not because of some right turn the country took during the Reagan administration, but because progress in technology has made it much easier to start a new company that grows fast.”

Not even close. The richest hundred people have gotten wildly richer as a result of crony capitalism, in which the richest are able to bend the political system to their will via overt bribery, creating unfair advantages for their ventures and endless loopholes for their personal wealth to avoid taxation. The ventures of the very rich are given unearned integration into political life, again making them a shoo-in for special treatment.

Remember how the failing banks in the financial crisis were considered too big to fail, and were accommodated at the public’s expense? This kind of behavior insures the rich against loss with money culled from the poor. Information technology is a gold rush, and it creates rich people by forging new vehicles of capital– generating wealth. The economics of a gold rush are quite clear, but PG forgets that the vast, vast majority of the workers in the economy are not participating in the gold rush, nor could they.

“And that group presents two problems for the hunter of economic inequality. One is that variation in productivity is accelerating. The rate at which individuals can create wealth depends on the technology available to them, and that grows polynomially. The other problem with creating wealth, as a source of inequality, is that it can expand to accommodate a lot of people.”

Productivity has been increasing for decades, and at one point in time, wages tracked productivity. Then the relationship between wages and productivity fell apart. This means that business owners were benefiting from increased worker productivity, but the workers were not– another cause of economic inequality that can be attributed directly to the owners not allowing enough money to go to their workers. If productivity is accelerating, wages should be too. Rather than being treated as slaves who require a dole, as they presently are, workers must be considered close partners in economic production.

“Most people who get rich tend to be fairly driven. Whatever their other flaws, laziness is usually not one of them. Suppose new policies make it hard to make a fortune in finance. Does it seem plausible that the people who currently go into finance to make their fortunes will continue to do so but be content to work for ordinary salaries? The reason they go into finance is not because they love finance but because they want to get rich. If the only way left to get rich is to start startups, they’ll start startups. They’ll do well at it too, because determination is the main factor in the success of a startup. [3] And while it would probably be a good thing for the world if people who wanted to get rich switched from playing zero-sum games to creating wealth, that would not only not eliminate economic inequality, but might even make it worse. In a zero-sum game there is at least a limit to the upside. Plus a lot of the new startups would create new technology that further accelerated variation in productivity.”

Once again: the current flap about economic inequality is not about people wanting to become rich, it is about people wanting to get by. Most people are not driven. Everyone wants to at least get by. You will not stop people from being driven to become rich by making it possible for everyone else to get by.

“So let’s be clear about that. Ending economic inequality would mean ending startups. Are you sure, hunters, that you want to shoot this particular animal? It would only mean you eliminated startups in your own country. Ambitious people already move halfway around the world to further their careers, and startups can operate from anywhere nowadays. So if you made it impossible to get rich by creating wealth in your country, the ambitious people in your country would just leave and do it somewhere else. Which would certainly get you a lower Gini coefficient, along with a lesson in being careful what you ask for. ”

No, it wouldn’t. Places around the world span both lower and higher economic inequality, and many of them have startups. There is nothing special about startups, and startups persist whether or not the society is extremely unequal. There are startups in Sweden. There are startups in China. There are startups in Nigeria. There are startups in Denmark. There is absolutely no reason to take pride in the American startup phenomenon if it requires people living in poverty– I do not believe that it does require this, though.

“And while some of the growth in economic inequality we’ve seen since then has been due to bad behavior of various kinds, there has simultaneously been a huge increase in individuals’ ability to create wealth. Startups are almost entirely a product of this period. And even within the startup world, there has been a qualitative change in the last 10 years.”

Do not mistake the tech startup for a method of creating wealth that anyone can step into. Coding is a difficult skill that most people are not about to retrain into, even if it’s lucrative.

“Notice how novel it feels to think about that. The public conversation so far has been exclusively about the need to decrease economic inequality. We’ve barely given a thought to how to live with it.

I’m hopeful we’ll be able to. Brandeis was a product of the Gilded Age, and things have changed since then. It’s harder to hide wrongdoing now. And to get rich now you don’t have to buy politicians the way railroad or oil magnates did. [6] The great concentrations of wealth I see around me in Silicon Valley don’t seem to be destroying democracy.”

Living with economic inequality is uncomfortable for the majority of the population, but it is comfortable for the rich. The way to live with it is to defer having children, not get a graduate education, never own a home, have a shitty car, never eat out, don’t go on vacation, work two jobs, don’t ever get sick, don’t get married, never pay off student loans, never save for retirement or an emergency, and never get arrested.

Seems pretty shitty, right? Seems like something people would want to change for the better, right? I will also state that all of the above items vastly detract from a person’s free mental and physical energy, which results in less innovation and ultimately less creation of the “startups” that income inequality is supposed to support. PG even acknowledges this, but doesn’t seem to understand the visceral impact of income inequality.

To crystallize everything, let’s hop backward to a time when there was less inequality and compare lifestyles. In yesteryear, families required only one breadwinner, and debt beyond a mortgage was unknown. People had a car per person and a college education. If you were sick, you could pay for a doctor. People had savings. People married young and bought starter homes… then moved into larger homes. People had children. People could care for their aging parents without moving back in. People had pensions, retirement funds, and plans to use both. All of this wealth derived directly from workers selling their labor for money. Starting new businesses happened frequently because there was a robust safety net to fall back on in case of failure. Workers banded together to protect their share. Wages tracked productivity.

Now: none of the above, and families often consist of two breadwinners (& no children) with a hearty amount of debt, nothing owned, and few savings. The family unit itself may even be weaker because of less shared ownership. Wages haven’t tracked productivity for decades, so wages haven’t risen since the previous story was normal. We’ve lost all of that ground: not just some of it, all of it, and more. We’re back to the 1920s– wage slaves with few rights and no political ability to change things.

Is this what PG thinks is okay?

If you liked this response, follow me on Twitter @cryoshon.

Also, be sure to check out my Patreon page!

How to Decide What to Prioritize At Work

Deciding what to prioritize during work is a critical skill that every employee in any job must understand in order to excel. In my previous post regarding praise and criticism at work, I discussed how the coherence of priorities among members of a team results in opportunities for criticism or praise. Given that an individual’s prioritization of work tasks has direct results which other coworkers and managers evaluate, picking the correct order of priorities has high social stakes as well as obvious economic stakes.

Let’s define the term priority in the context of work. A priority is a task or collection of closely related sequential tasks which consume a worker’s time and energy resources to complete. Most workers have numerous priorities which must be completed. Priorities frequently exist in a list of importance, whether this list is explicitly stated or not– we’ll talk more about the “importance” of priorities later on. Certain priorities have dependencies on other priorities that must be completed before they can be performed; dependent priorities are often completed by a coworker.

The completion of priorities consumes the majority of an industrious worker’s time and energy spent at work, and businesses hire new employees in order to complete more priorities. The role of the manager is to assign priorities to the members of the team, and ensure that the members of the team are able to complete their priorities. Frequently, meetings are held in order to discuss which priorities are “more important” than others. Let’s unpack the concept of more important priorities versus less important priorities.

A priority is considered to be “high priority” or “more important” than other priorities if the completion of that priority is time sensitive or is a dependency of another person or group’s priority. We will define a time sensitive priority as a priority that is initiated close to its deadline. Most priorities do not fit this bill, and so there may not be a clear reason to complete one before another. With that being said, missing or confusing the reason for arranging priorities in a certain order is a perpetual stumbling block for most employees.

Typically, employees and managers alike have only a case-by-case way of reasoning about which priorities should be completed first. I’ll try to clear this up by offering a concrete thought system for ordering priorities.

Figuring out a process to correctly determine the list of priorities requires thinking from a holistic perspective. The efficiency goal of a team is to maximize the team’s completion of high priorities; when the highest priority is completed, the next highest priority takes its place. Teams are made of individuals, and the priorities of the team are divided among the individuals in the team. For an individual team member to complete priorities at their maximum capacity, their priority dependencies must, at minimum, be completed.

If a worker’s priority dependencies are not met, they may become unable to complete a priority and instead opt to complete a different priority. If a worker can’t start on any of their priorities as a result of the dependencies not being completed, the worker is said to be blocked. If a worker is blocked, the team is wasting that worker’s time and energy resources, so organizing priorities in a way which avoids blocking is critical to a smoothly functioning team.

While dependent priorities may seem to always take the highest place on the list, there is an inherent tension between dependent priorities, time sensitive priorities, and general time management. Is it higher priority to finish writing a report by its deadline, or to proofread a coworker’s finished report so that they may start writing a new one? The answer is that it depends on how much time and other priorities you have, and also how much time and other priorities your coworker has.

A worker’s general purpose is to transmute their time and energy resources into completed priorities. Time at work is hopefully finite, and is marked by the explicit or implicit passing of deadlines. Most priorities have deadlines after which their completion will be considered discordant with expectations, likely resulting in criticism. Priority deadlines exist together simultaneously, and approach the present with equal speed regardless of the amount of resources invested. Having strong knowledge of time management and the time it takes a given worker to complete a given priority is critical.

A smart team will have a calendar with the deadlines for all of the team’s priorities as well as the individual team member priority deadlines. Having this information and mapping it out is the first step in a system for determining the most important priority.

  1. Organize all priorities onto a calendar with each deadline marked clearly. A non-calendar schedule is also fine. This organization scheme should be zoomed in to the minute, or zoomed out to the year, as necessary. The more detailed and granular the calendar is, the more effective it will be. A digital calendar or schedule shared by the entire team is the way to do this correctly, as a paper calendar would probably get full too quickly.
  2. Mark into the calendar or schedule who is going to devote their time to each priority. Be realistic about what each team member can do, but also recognize inertia: people in motion tend to stay in motion. An overloaded teammate making good headway is often a smarter choice for an even heavier load than a moderately loaded but frequently dependency-stalled teammate. Time insensitive and non-dependent priorities can be left unassigned as extras for whoever finishes their priorities first, but I don’t recommend it unless the team is exceptionally motivated to churn through work, which most are not. Unassigned priorities tend to fall through the cracks, so don’t set yourself up for failure by assuming someone will pick up the burden.
  3. Identify the amount of time it takes to perform each priority assuming that the priority’s dependencies are met. If you don’t have this data, I highly suggest gathering it. If you want to get fancy, you can also identify how much worker energy each priority expends per unit of time. Understanding which priorities are effort intensive can often lead to insights.
  4. Identify which priorities are dependent on the completion of other priorities. Write down exactly which other priorities need to be finished first in order to start work. Identify who is responsible for completing the dependencies, and identify if they are going to be dependent on further priorities being completed before they can unblock each other. Identify the points at which blocking is likely to happen.
  5. Identify which priorities are time sensitive. Time sensitive priorities are always close to their deadline; the label of time sensitivity can be surreptitiously imposed by management, meaning that time sensitive priorities are not always known in advance. A priority can also become time sensitive if it is another worker’s dependency, and that worker will be stalled and unable to do any work if the priority is not finished by its deadline.
  6. Arrange priorities to eliminate blockages as much as possible. Ensure that all dependencies are completed to avoid stalling. Perfection may not be possible here. Pragmatic judgments about the harm of missing deadlines in order to maintain a steady flow of priority completion will be required. Time sensitive priorities passed down from on high may cause hiccups in steady flow, causing blockages– it’s best to leave some slack time between a priority’s minimum completion time and its deadline whenever possible.
  7. Assess blockages as they occur and determine whether a different ordering of priorities would prevent them. A no-brainer: adjust based on how the plan works in action. Sometimes multiple blockages can be alleviated with a single change. For the programmers reading, a minimal sketch of this ordering process appears just after this list.
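
The sketch below, in Python, is one plausible encoding of the steps above rather than a definitive implementation: priorities form a dependency graph, the graph is walked so that dependencies always complete before the priorities that need them (avoiding blockages), and ties among unblocked priorities are broken by the earliest deadline. All names, durations, and deadlines are invented for illustration.

```python
# A minimal sketch of the priority-ordering system above. Requires Python 3.9+.
from dataclasses import dataclass, field
from graphlib import TopologicalSorter
from typing import List

@dataclass
class Priority:
    name: str
    deadline: int                                         # day number on the team calendar (step 1)
    duration: int                                         # estimated days of work (step 3)
    depends_on: List[str] = field(default_factory=list)   # prerequisite priorities (step 4)

def order_priorities(priorities: List[Priority]) -> List[Priority]:
    """Order priorities so dependencies always come first (avoiding blockages,
    step 6), breaking ties by the earliest deadline (time sensitivity, step 5)."""
    by_name = {p.name: p for p in priorities}
    ts = TopologicalSorter({p.name: set(p.depends_on) for p in priorities})
    ts.prepare()  # raises CycleError if priorities circularly depend on each other
    ordered = []
    while ts.is_active():
        # Among the currently unblocked priorities, do the most time sensitive first.
        for name in sorted(ts.get_ready(), key=lambda n: by_name[n].deadline):
            ordered.append(by_name[name])
            ts.done(name)
    return ordered

# Invented example: the report can't be proofread before it's written.
plan = [
    Priority("write report", deadline=5, duration=3),
    Priority("proofread report", deadline=7, duration=1, depends_on=["write report"]),
    Priority("prepare slides", deadline=4, duration=2),
]
for p in order_priorities(plan):
    print(p.name, "-> due day", p.deadline)
```

The duration field isn’t used by the ordering itself, but it’s what you’d use for the slack check in step 6– comparing a priority’s minimum completion time against its deadline.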

I highly suggest that managers take this explicit methodology for priority ordering to heart. Conducting endless meetings to assign priorities and gather status updates is the most popular (and most time-wasteful) alternative method of keeping a team’s priorities ordered in the same way. Having a shared team system for picking the order of priorities reduces blockages, reduces workplace stress, and improves a team’s output.

When everyone is on the same page when it comes to priority completion, we call it good teamwork. A frequent sensation of good teamwork is the gratitude of being handed a snack immediately before realizing you’re hungry.

I hope you liked this article! I struggled a bit to clarify my thoughts during the final two items of the list, and I may revisit them shortly to edit. If you liked this piece, follow me on Twitter @cryoshon and if you’re feeling generous, check out my Patreon page to support me writing more articles like these!