A Retrospective on Nassim Taleb’s The Black Swan

Nassim Taleb’s seminal exploration of randomness and unknown unknowns remains relevant ten years after its debut. An absolute classic within the food-for-thought genre of nonfiction, The Black Swan dips into philosophy, economics, finance, epistemology, and empiricism. Writing in an intellectual style with frequent references to other thinkers and artists, Taleb regales the reader with his sharp wit and excerpts from his experiences on Wall Street. If you fancy yourself a thinker who likes to be challenged by counterintuitive ideas, The Black Swan is the right book to pick up.

Within The Black Swan, Taleb’s primary task is to explain what he calls “black swans,” which can be described briefly as unforeseen events that cause extreme effects in their native context. Taleb explains how the Great Financial Crisis resulted from a confluence of black swan events and lays the groundwork for his strategy to mitigate their effects.

In the course of discussing how to mitigate the damage caused by financial and economic black swans, Taleb teaches the reader about a variety of logical fallacies. Of particular interest is the narrative fallacy, which Taleb claims is a natural human mechanism for creating links between data points and incorrectly extrapolating trend lines. The core message of The Black Swan can be summarized as “past behavior does not predict future events.”

Much of Taleb’s writing demands that the reader engage in critical thinking. While an excellent book, The Black Swan is not a technical paper, and economists, scientists, or finance professionals who seek a mathematical investigation into Taleb’s ideas will be disappointed. Thankfully, Taleb’s academic bibliography is quite extensive, so curious readers can follow up with the empirical evidence that informs his views if they so desire.

Perhaps the most impactful nonfiction book of the 2000s, The Black Swan is a critical read for those seeking to extend their library of thoughtware. By introducing unfamiliar cognitive and financial ideas and explaining them fully, Taleb imbues the reader with a different perspective on events. If just a taste of Taleb’s philosophy isn’t enough, readers can follow up with the subsequent books in his Incerto series to learn more.

How To Read A Book is a Must Read

“How To Read A Book” sounds like a book intended for elementary school students, but that couldn’t be farther from the truth. Mortimer Adler’s 1940 nonfiction lesson on how to read effectively should be read by everyone before they pick up another book. Using beautifully concise yet characteristically 1940s-era language, Adler’s book is a primer on how to absorb written information. If you’re interested in growing smarter by reading a book, look no further.

As Adler says early on, “Books are the way that we learn from absent teachers.” In How to Read a Book, the reader quickly understands that the teacher is Adler, and the book itself is, in fact, more of a class than a simple library of facts to plow through. Adler’s professorial tone guides the reader through an exploration of what he calls the “levels of reading”—basic semantic understanding, basic interpretation, and finally, critical thinking. In discussing each level, Adler explains how readers can identify opportunities to improve their skills at that level.

By reading through How to Read a Book, the reader will pick up good reading habits and boost their critical thinking. Asking questions of your reading material like “what is the author trying to accomplish with this piece?” and “does the author succeed in what they were trying to do?” becomes second nature. As a bonus, Adler closes How to Read a Book with a bibliography of challenging books for improving each level of reading. Though all of the books mentioned predate 1940, the breadth and sophistication of Adler’s reading list is impressive, and it contains books that will challenge the reader no matter how competent a reader they are.

You may want to go back to your collection and re-read some of your favorites after you’re equipped with new reading skills. With How to Read a Book behind you, it’s likely that you’ll find new perspectives on your favorites, as well as points of improvement. Once you’ve learned how to read a book, your skill will only increase with time.

How to Be A Good or Bad Interviewer

The job interview process has been written about extensively, and some people even receive specialized training from their employers on how to properly conduct an interview. Everyone has ideas about the best way to conduct interviews, and large companies tend to have specific motifs or methods of interviewing which can take on an undeservedly mythical reputation. There’s a lot of debate over whether the interview is an effective way of selecting talent, but one thing is settled: if you are looking for a job, you will have to have an interview, and if you are looking to fill a job, you will have to interview someone. This article is an analysis of common mistakes that I’ve seen interviewers make. I can’t help but propose a few better ways of conducting interviews alongside my analysis.

The best parts of job interviewing are finding out interesting things about companies while meeting the potentially cool people who populate them. The worst parts are finding out exactly how bad people can be at conducting an interview. I am not an expert on interviewing people by any means, nor am I an expert at being interviewed– I’ve made quite a few awful mistakes in both camps, to be sure. I think that I have a few good ideas on what not to do as an interviewer, though. The anecdotes that I’ll provide here are not embellished. I will state that this experience comes mostly from interviewing for scientific jobs, and it may be that the personality of scientists precludes them from being good interviewers, but I don’t believe that this is the case.

It behooves everyone to be good at acing job interviews because jobs are desirable, but few are inclined to be perfectionistic about the other half of the process. Companies want good talent, but they can always extend an offer to someone they like and have at least a chance of acceptance, even if they interviewed the candidate poorly and the candidate performed poorly.

Because of this inherent inequality, the process of interviewing candidates is typically far weaker than it should be in a few different dimensions. To clarify: attending the job interview and presenting a good face to potential employers is always a high priority for the job seeker, but preparing to interview a candidate and interviewing a candidate properly is very rarely a high priority for the people conducting the interview. This mainstream habit of interviewing carelessness shows like a deep facial scar. The consequence of deprioritizing interviewer preparation is sloppy execution and wasted time for all parties.

First, in every interview that I have ever been on either side of, there has been at least one person who has not read the resume or given the candidate any forethought. Do not be this person, because this person has little to contribute to the investigation into whether the candidate is suitable. Pre-reading the candidate’s resume is a must if the aim of the interview is to determine whether the person is qualified technically and socially. The purpose of the job interview is not to check whether the candidate can recapitulate their resume without forgetting their own accomplishments, but rather to assess whether the candidate will improve a team’s capability to execute work. This seems self-evident, yet I have been interviewed by several unrelated people who explicitly stated that they would see whether what I was saying matched what was on my resume.

Aside from pre-reading the candidate’s resume, interviewers should also pre-think about the candidate. Practically no interviewers I have interacted with have given the candidate any meaningful forethought. Writing a job description or giving the candidate’s resume a once-over does not count as pre-thinking. If you want to find the perfect person for a position, it is a disservice to your company not to prioritize premeditation about the candidate. Without premeditation, there can be no intelligent questioning of the interviewee. Is the person’s previous experience going to give them unique insights into the job they are hoping to fill? Is this candidate going to be socially successful in this position? Set time aside to write down these questions when there is nothing else competing for your attention.

Frank consideration of whether the person will fit in with the others on the team should be broached ruthlessly at this early step. Social conformity is a strong force, and an inability to fit in can cause disruption among less flexible teams. To be clear, I think that heterogeneous teams have many advantages, but I also think that most interviewers are largely engaged in an exercise of finding the roughly qualified candidate who conforms most readily to the already-established majority. Biases about what kind of person the candidate is are going to warp the judgment of the interviewer no matter what, so it’s better to air them out explicitly so that they may be compensated for or investigated further when the candidate comes in. The objective here is not to find things to dislike about the candidate, but rather to identify where the biases of the interviewer may interfere with collecting good data from the candidate when they arrive.

Remember that this critical step is rarely as simple as it seems. What kind of positive job-related things does the interviewer think about themselves? These positive self-thoughts will, unfortunately, be used as a hidden rubric to assess the candidate. The interviewer identifying with the candidate is one of the strongest guarantors of a job offer. The other takeaway here is that once the candidate comes in for the interview, the interviewer should explicitly note points of personal and professional identification between themselves and the candidate! Identifying with the candidate is great for the candidate’s prospects of getting the job, but it may not be the correct choice for the team, which would have to accommodate a new person who isn’t qualified.

Consider doubts about the candidate based on the information available, then write down questions which will help to address those doubts– being tactful and canny at this step is an absolute must, so if there’s any doubt about being able to execute such questioning gracefully, defer to someone else who is more skilled. Is the candidate too young or too old to fit in with the team, or are there concerns about the candidate’s maturity? Is the candidate visibly a member of any group that isn’t the majority? Is the candidate going to rock the boat when stability is desired? It’s better to clarify why the candidate may not be socially qualified than to hem and haw without explicit criteria.

Winging it simply will not produce the best possible results here, because the interviewer is really interviewing their own thoughts on a candidate who is still unseen. Honesty regarding the team’s tolerance for difference is critical. To be clear, I do not think that this heavily conformity-based social vetting of candidates is good or desirable whatsoever. In fact, I think the subconscious drive toward a similar person rather than a different one is a detrimental human habit that results in fragile and boring social monocultures. I am merely trying to describe the process by which candidates are evaluated in reality, whether or not the interviewers realize it. The social qualification of the candidate is probably the largest single factor in deciding whether the candidate gets the job, so it’s important to pay attention rather than let it go unspoken. Interviewing a candidate is a small but complete project that lives within the larger project of finding the right person for the open position.

That concludes what to do in the period before the candidate arrives. But what about once the candidate is sitting in the interview room? In situations where there are multiple interviewers, successive interviewers nearly always duplicate the efforts of previous interviewers. They ask the same questions, get the same answers, and perhaps have a couple of different follow-ups– but largely they are wasting everyone’s time by treading and re-treading the same ground.

Have a chat with the team before the interview and discuss who is going to ask which questions. The questions should be specific to the candidate and should follow from the individual premeditation that each member of the interviewing team performed beforehand. The same concerns may crop up for different candidates, which is fine. Examine recurring themes of concern, and figure out how to inquire about them. Assign the most difficult or probing questions to the most socially skilled teammate. If there’s no clear winner in terms of social skill, reconsider whether it’s going to be feasible to ask the candidate gracefully at all.

Plan to be on time, because the candidate did their best to be on time. In my experience, interviewers are habitually late, sometimes by as much as thirty minutes. This problem results from not prioritizing interviewing as a task, wastes everyone’s time, and is entirely avoidable. Additionally, make sure that your interviewing time is uninterrupted. An interviewer who is distracted by answering phone calls or emails is not an interviewer who is reaping as much information as possible from the candidate. If there is something more pressing than interviewing the candidate during the time which was set aside by everyone to interview them, reschedule. Interviewing is an effort- and attention-intensive task, and can’t simply be “fit in” or “made to work” if there are other things going on at the same time.

The interviewers should have the candidate’s resume in hand, along with a list of questions. When possible, the questions should be woven into a conversational framework rather than delivered in an interrogation-style format. Conversational questioning eases the candidate out of interview mode slightly, though given the stress of being interviewed, it’s neither possible nor desirable to jolt the candidate into a fully informal mode. Remember that the goal is to ask the candidate the questions that will help you determine whether they are socially and technically qualified for the job. The facade of the candidate doesn’t matter, provided that you can assess those qualifications.

Don’t waste everyone’s time with procedural, legal, or “necessary” but informationally unfruitful questions! Leave the routine stuff to HR and instead prioritize getting the answers to questions that are specific to evaluating this candidate in particular. HR isn’t going to have to live with having this person on their team, but they will likely be concerned about logistical stuff, so let them do their job and you can do yours more efficiently. If there’s no HR to speak of, a phone screen before the interview is the time for any banalities. To reiterate: focus on the substantial questions during the interview, and ensure that procedural stuff or paperwork doesn’t eat up valuable time when the candidate is actually in front of you.

If there are doubts about a candidate’s technical abilities or experience, have a quick way of testing them in hand, and be sure to notify the candidate beforehand that they will be tested. Once again, do not wing it. Remember that the candidate’s resume got them to the interview, so there’s no point in re-hashing its contents unless a specific question prompts the candidate to do something other than summarize what they’ve already written down for you. I highly suggest designing questions to shed light on the things which are not detailed in the resume or cover letter. The candidate’s thought process and demeanor are the two most important of these.

Assessing the experience or thought process of the candidate can frequently be done by posing a simple “if X, then what is your choice for Y?” style question. In this vein, consider that personal questions aren’t relevant except to assess the social qualifications of the candidate; questions regarding the way that the candidate deals with coworkers are therefore fair game. I highly suggest making questions as realistic as possible rather than abstract; abstract questions tend to have abstract answers that may not provide actionable information, whereas real creativity involves manipulating the particulars of a situation.

Aside from asking fruitful questions, the interviewer should take care with the statements they direct toward the candidate. I will take this opportunity to explain a common and especially frustrating mistake that I have experienced interviewers making. As should be self-evident, the interview is not the time to question whether the candidate was suitable to bring in for an interview in the first place. To discuss this matter with the candidate during the interview is a misstep, and that time could be better spent trying to understand the candidate’s place in the team.

To this end, it is counterproductive and unprofessional to tell candidates during the interview that they are not technically or socially qualified for the position they are interviewing for! The same goes for interviewer statements which explicitly or implicitly dismiss the value of the candidate. Interviews are rife with this sort of unstrategic and unfocused foul play. This has happened to me a number of times, and I have witnessed it as a co-interviewer several times as well.

A red flag for a terrible interviewer is that they tell the candidate that they lack qualifications or experience, or try to make the candidate admit as much. Mid-level managers seem to be the most susceptible to this mistake, and mid-career employees the least. It is entirely possible to find the limit of a candidate’s knowledge in a way that does not involve explicitly putting them down. Voice these concerns to the other interviewers before the candidate is invited in. If your company considers minimizing the candidate’s accomplishments a standard posturing tactic designed to produce lower salary requests, consider leaving.

Aside from being demeaning, putting down the candidate during the interview is a tactic frequently used by insecure interviewers who aren’t fit to be evaluating candidates. There is no greater purpose served by intentionally signaling to the candidate that they are not valuable and are unwanted! Time spent lording over the candidate how ill-fit they are for the position is wasted time that could be better spent elsewhere.

Don’t play mind games with the candidate– it’s immature, misguided, and ineffective. Such efforts are nearly always transparent and constitute an incompetent approach to interviewing based on the false premise that candidates misrepresent their ability to do work, and so the interviewer must throw the candidate off their guard in order to ascertain the truth. This line of thinking dictates that the “true” personality or disposition of the candidate is the target of information gathering during the interview. The habits and realized output of a person while they are in the mode of working are the real target of inquiry in an interview, so don’t get distracted by other phenomena which require digging but don’t offer a concrete return.

Typically, the purpose of these mind games is to get beyond the candidate’s presentable facade in an attempt to evaluate their “true” disposition or personality. This goal is misguided because the goal of an employee is not to have a “true” disposition that is in accordance with what their employer wants, but rather to have an artificial disposition that is in accordance with what their employer wants. We call this artificial disposition “professionalism”, but really it is another term for workplace conformity. I will note that professionalism is a trait that is frequently (but not always) desirable because it implies smooth functioning of an employee within the workplace. The mask of professionalism is a useful one, and all workers understand more or less how to wear it. A worker’s “true” or hidden personality is unrelated to their ability to cooperate with a team and perform work, if that deeper personality even exists in the individual at all. Conformity keeps the unshown personality obedient and unseen in the workplace, so it isn’t worth trying to investigate anyway.

After the candidate has left, it’s time for a debrief with the team. Did the candidate seem like they’d be able to fit in with the team socially? If not, could the team grow together with the candidate? Did the candidate pass the relevant technical questions? Is the candidate going to outshine anyone in the team and cause jealousy? Did anyone have any fresh concerns about the candidate, or were any old concerns left unresolved despite efforts to do so? It’s important to get everyone’s perspectives on these questions. Report back on the answers to the questions that were agreed upon beforehand. If everyone did their part, there shouldn’t be much duplicated effort, but there should be a lot of new information to process.

Not all perspectives are equal, and not all interviewers are socially adept enough to pick up subtle cues from the candidate. Conversely, some interviewers will ignore even strong social cues indicating a good fit if their biases interfere. Interviewers have to remember that their compatriots likely had different experiences with the candidate– if they didn’t, effort was wasted and work was duplicated.

Is the candidate worth calling in for another interview, or perhaps worth a job offer right away? What kind of social posturing did the candidate seem to be doing during each interaction? What was their body language like when they were answering the most critical inquiries? Pay particular attention to the differences in the way that the candidate acted around different interviewers. This can show the interviewers where some of the candidate’s habits lie, and allow analysis of whether those habits will conform with the group’s.

If the interviewing process is really a priority, the interviewers will write down the answers to the above questions and compare them. How you process the results of this comparison is up to you, but if you skip it, you’re not getting the most information out of interviewing that you could. If you take one concept away from this piece, it should be that teams have to make their interviewing efforts a priority in order to avoid duplicating questions, avoid wasting time on posturing, and properly assess the social and technical qualifications of the candidate.

If you liked this piece, follow me on Twitter @cryoshon and check out my Patreon page! I’ve been sick the past week (as well as involved in an exciting new opportunity) so I haven’t been writing as much, but I should be over my cold by Monday and back to regular output.

Why the Sharing Economy is Awful

Continuing with my thinking on late capitalism has brought me to consider the idea of the “sharing economy”. Many people seem to intuitively understand the gist of the sharing economy– people use information technology to facilitate other people’s renting of their stuff. Immediately, there is something strange: “sharing” does not mean “renting” in any context except the term “sharing economy”. The sharing economy is the renting economy; no ownership is actually shared, nor is any use actually “shared”, except in exchange for money.

If anything, the sharing economy refers to the mass choice of struggling workers to rent out the combination of their labor time and their expensive stuff to information technology companies. The “sharing” with the end users is the least relevant part of the story, because the end users are actually just consumers finding their preferred product. Consumers are not participants in the particular economic theme of “sharing”, as they share nothing whatsoever, and instead buy the product as they desire it.

Typically, the sharing economy doesn’t provide a totally novel product to consumers, but rather a more convenient version of a product than the traditional competition offers. The consumers for the product being “shared” existed before the sharing economy came along, so the demand was already there too. The consumers are finding the most efficient path for their money to turn into the product they want– a path that information technology companies have provided for them by creating an app which allows for mass utilization of capital that they do not own, using workers they do not hire.

In a time of weak economic demand, the incentive to generate revenue is as high as ever. There is strong pressure to keep costs down (precluding large capital purchases or the development of brand new products) and to cut unprofitable programs in order to keep revenue as strong as possible despite weaker sales. This poses a problem: how can revenue be generated reliably when demand is weak? To answer this question, we have to step back and examine how revenue is made under normal circumstances.

Revenue is produced by workers utilizing capital to provide something of value. Capital may be thought of abstractly as large quantities of money that can be transformed into physical objects which are used to produce more money, or it can be thought of as the objects that produce money themselves. Traditionally, capital might be a piece of factory equipment, and the owners of capital are the business owners. Capital may depreciate in value as it is utilized to produce revenue. Eventually, the capital may need to be revitalized or replaced.

In the traditional model, normal workers don’t own the capital that they utilize to produce revenue. The worker is paid a fraction of the revenue of the company– most of the revenue of any given company is used to maintain its capital and its workforce. It is the responsibility of the owner of the capital to provide wages to the worker who utilizes said capital to produce revenue. What remains after maintenance of capital and wages is called profit. The profit may be used to purchase more capital, put in the bank, or paid out to workers or owners. The key takeaway here is that workers traditionally do not have any financial responsibility toward the capital which they utilize. The role of the worker is to utilize the capital in order to collect wages.

The difference between companies renting capital in the sharing economy and traditional companies producing the same good is critical. The traditional competition is likely to be burdened by upkeep costs in ways that their sharing economy counterparts are not– after all, traditional companies have to own and maintain the capital themselves in addition to retaining workers. Sharing economy companies typically find ways to use contractors instead of full-time workers, reducing their operating costs by providing fewer benefits. The utilization of worker capital to produce revenue is quite an interesting development when paired with the rise of “contractor” style employment arrangements.

The most visible pillars of the sharing economy are AirBnB and Uber. I am not trying to suggest that these companies are “bad” for the economy. I use both of these services and enjoy the products they offer. I am suggesting that the sharing economy is detrimental to workers, who are effectively forced to pony up their own capital before being allowed to participate in what amount to low-wage, unskilled-labor-style jobs. What isn’t commonly understood is that the sharing economy is economically exploitative, extracting revenue from workers’ personal capital.

The sharing economy turns the traditional capital-and-revenue equation on its head. Instead of a company owning capital and employing workers to gain revenue from it, the company merely rents capital owned by the worker as part of the worker’s wages, offloading the up-front cost of capital and discharging the costs of capital maintenance onto the worker. Revenues no longer flow toward the owner of the capital, but rather to the renter of the capital. After that, things function normally: workers are paid their static share of the revenue, which is low despite their bringing capital to the table.
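
To make the reversal concrete, here is a minimal back-of-the-envelope sketch in Python. The function names, the platform’s cut, and all dollar figures are invented for illustration– this isn’t the real economics of any particular company– but the structural difference is simply whose share the capital upkeep comes out of:

```python
# Toy comparison of who bears the cost of capital under the traditional
# model versus the "sharing" model. All dollar figures are invented.

def traditional_split(revenue, wages, capital_upkeep):
    # The company owns the capital, so upkeep comes out of its share.
    company_profit = revenue - wages - capital_upkeep
    worker_income = wages
    return company_profit, worker_income

def sharing_split(revenue, platform_cut, capital_upkeep):
    # The worker owns the capital, so upkeep comes out of their share.
    company_profit = revenue * platform_cut
    worker_income = revenue * (1 - platform_cut) - capital_upkeep
    return company_profit, worker_income

# The same $1000 of revenue under each arrangement:
print(traditional_split(1000, wages=600, capital_upkeep=200))      # (200, 600)
print(sharing_split(1000, platform_cut=0.25, capital_upkeep=200))  # (250.0, 550.0)
```

Under the traditional split, the company absorbs the upkeep; under the sharing split, the platform’s take is insulated from upkeep entirely while the worker’s is not.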

The effect of the sharing economy is a part-time injection of previously untapped capital into the economic ecosystem. Common items which most people have (a spare room or car, for instance) can now be used as revenue-producing capital by their owners, who are likely strapped for revenue due to poor economic conditions. Thus the sharing economy lets workers short on revenue rent out their capital alongside their labor, giving them labor opportunities that they wouldn’t have otherwise– a very strong economic incentive. Instead of requiring capital sunk into credentials or time spent beefing up a resume, workers in the sharing economy are merely required to lay a chunk of their capital on the table in order to start working. In some ways, this is good, as it allows people to work for wages when they would otherwise not be competitive enough to get a job.

This reversal of the normal order certainly has benefits: the freedom afforded to those who choose to work as Uber or Lyft drivers is much greater than that of the median worker, who must adhere to standardized hours and habits. The same could be said for the person who puts their spare room up on AirBnB. The income afforded to the workers of the sharing economy certainly keeps many people afloat– but broadly speaking, the sharing economy is an unequal economy because neither risk nor profits are shared.

Workers accept high risk to their capital from constant heavy utilization, and are not rewarded for it. Capital depreciation is likely, and is not compensated for by wages. Total losses of capital are not compensated for whatsoever. Instead, workers put a lot on the line in exchange for average wages whose rate does not increase despite large profits. Should the worker lose their capital, they are out in the cold.

Before the sharing economy existed, the capital of the lower classes was unreachable and reserved solely for personal use; in this sense, the sharing economy is a huge economic leap forward, as it increases the ability of wealth to flow, which, broadly speaking, generates opportunity. Unfortunately, within the paradigm of the sharing economy, wealth largely flows upward rather than circulating. It is unlikely that a worker participating in the sharing economy will make enough money to afford another capital purchase should their revenue-producing capital be destroyed by the process.

There is a case to be made that the sharing economy is a system for transferring wealth from the lower economic classes to the owning class. The capital of the lower classes is first used as a certificate signalling employment-worthiness, then used to generate revenue for the companies that rent it en masse to create products for consumers. The profits made are not returned to those who own the capital, but rather to those who own the information technology company which rents it. The owners of the capital are in this situation bled at every step of the process and subject to a high degree of instability.

What’s a consumer to do? To start, do research and find out which sharing economy product provider is the most ethical. Paying workers better wages for ponying up their own capital is more ethical than the alternative. Finding out which companies bring on workers as actual employees rather than contractors is also a good idea. Profit-sharing for workers and accommodations for worker capital loss and depreciation are as yet unheard of, and so are to be considered the icing on the cake.

If you liked this article, check me out on Twitter @cryoshon and also hit up my Patreon page!

How to be a Good Adviser by Playing Pretend

When I left a job about a year ago, one of my friends and coworkers asked me at my going-away party if I had any advice to pass on to the team. At the time, I said that my advice was not to give generalized advice without a specific issue in mind, because it wouldn’t contain actionable information that would improve the receiver’s experience. With the benefit of time, I can see that there are a few more wrinkles to discuss regarding advising.

Most of my early experience with advising was from my school and university years. Later, I’d go on to advise my friends on their business ventures by asking questions, then following up with more questions. I’ll disclose a caveat to my thinking on advising: I’ve never been keen on asking for advice because of all the bad advice I’ve received over the years. My negative advising experiences have given me a lot of ideas to chew on, though.

There is a distinction between offering a piece of advice and being an actual adviser, and for this piece I’ll touch on both, with an emphasis on the latter. I’d also like to revisit my going-away sentiment and delve a little bit deeper. Before I do, a brief discussion of what advice is and what advisers are is in order.

Generally speaking, people are familiar with the concept of taking advice from others regarding areas outside their expertise. Additionally, people are usually comfortable with the idea of providing advice to others when prompted– and, frequently to the frustration of others, when they are not prompted. Advice is the transfer of topical information or data by a third party to a person looking for a good outcome. A large volume of our communications are offering, requesting, or clarifying advice.

The concept of advice as information will be familiar to almost everyone. Frequently, the topical information that is elicited by a request for advice is anecdotal. If the adviser is careless or not directed, the anecdotal information offered to the advised may merely be tangentially related or actually unrelated to the issue at hand. Not everyone pays close attention to their outgoing advice if they have no skin in the game. The main problem with anecdotal evidence is that it refers to specific instances of a trend rather than the rules which govern that trend. Yet, most advice is anecdotal, perhaps as an artifact of humanity’s sensitivity to personal stories rather than hard data or universal laws.

Informally, it’s nearly impossible to escape anecdotal evidence when requesting or giving advice. Frequently, an adviser will forgo telling the actual anecdote and skip right to the advice they have distilled from their own experience, leaving the advised with an even more incomplete view. This has predictable consequences when paired with people’s tendency to do as others tell them: working from an incomplete group of anecdotes culled from the experience of others and processed from an uncomfortable position of ignorance, the advised make decisions based on the emotions of others rather than on clear-headed analysis.

I am sure nearly everyone has received completely heartfelt yet completely detrimental advice in their time. If we are lucky, we avoid the consequences of receiving bad advice and catch the mistakes of our advisers in time to reject their thoughts and prevent internalization. If we are unlucky, we follow the path to nowhere and are upset with the results.

Part of maturity is understanding that while others are capable of delivering bad advice, we too are likely to give bad advice if given the chance. We don’t have to commit to delivering advice if we don’t feel qualified, nor do we have to ask for advice or follow advice once given. Advice is just a perspective on an issue, and not all perspectives are equal.

Critically, good advice is specific and actionable rather than vague. If the best that an adviser can do is offer a general direction to follow up on, you’re outside the realm of their experience or outside the amount of effort they’re willing to invest in you. A typical red flag for bad advice is that it’s delivered quickly, sleepily, or nearly automatically.

Good advising is extremely effort intensive! Rid yourself of advisers that don’t respect you enough to apply themselves fully. In my experience, the prototypical awful adviser is coerced into the role rather than choosing it themselves. University advisers are the worst example of this kind of conscription. Identify which advisers are around only because they’re required to be, and then avoid them and their bad advice.

So, how are we going to limit our ability to deliver bad advice and maximize our delivery of good advice? Should we simply stonewall all requests for advice and refuse to ask others for help? I don’t think that this is the answer, because advice is one of the principal ways in which we can share the experiences of others and make use of experiences that we have not had ourselves. Sharing experiences is a critical component of being human, and it’s unlikely that we could stop even if we tried.

The way that I propose to avoid delivering bad advice and to actually deliver good advice is to use a mind-trick on ourselves. The mind-trick that I am referring to is playing pretend. First, I’ll need to build a mental image of the thing I want to pretend to be– the best possible adviser– then when it’s time to give advice, I’ll be able to pretend to be the embodiment of the image and put myself in the correct mindset for delivering good advice. After I’ve built the barebones of this mental image, taking it out for a test run with a hypothetical request for advice will help to fill in the details and also provide a template for how to think when it’s time to deliver real advice.

What are the properties of this mental image of the ideal adviser? I think that the perfect adviser is a professorial figure, and so adopting an academic tone and patient, receptive train of thought is necessary. Advising someone else shouldn’t be careless or haphazard, so the perfect adviser should mentally state an intention to provide their undivided and complete attention to the pupil for the duration of the session. The aim is to achieve a meditative focus on the present where the power of the adviser’s knowledge and experience can act without interference. The adviser is never emotional. Value judgments are deferred or unstated; the details and the pupil are at the forefront.

In order to advise properly, this professorial type will know the limits of his knowledge as well as his strong points, and will weight his statements to the pupil in accordance with how much he really knows, making sure to be precise with his language and to qualify his statements. Reaching the limits of the adviser’s knowledge isn’t something to be ashamed of, as it’s an interesting challenge for the ideal adviser to chew on.

The aim of the perfect adviser is to consider the particular details of the pupil’s situation, relate them to the universal trends which the adviser has uncovered with conscious effort, and then use a combination of the universal trends and the particulars of the pupil to offer a prescription for action. The mental image of the adviser will explicitly recite the universal trends to himself as he ponders the direction to indicate to his pupil. The conversation between the pupil and the adviser is marked by long pauses as the adviser takes the time to call critical trends and details into his working memory so that the pupil may make use of them. Advising is a conversation that can’t be rushed, because the adviser might forget to make an important connection or to communicate in a precise way. The best advising has no time limit.

With each stanza of conversation, the adviser will find that his idea of the prescription in progress is stalled by a facet of the pupil’s situation which hasn’t been discussed. The adviser asks deeply focused questions which will unblock the progress of his advice draft. The draft may have to be completely reworked in light of information gathered from the pupil. Once the draft is completed, the adviser will ask validating questions to see whether the draft is workable and realistic. Upon validation, the adviser will deliver the draft in a reassuring yet detached fashion.

I actually use this mental image when I’m called on to give advice, and I think it helps a lot. “Playing pretend” is just a convenient way of stepping into a foreign mindset without getting too self-conscious. The important takeaway here is that the mindset of being a good adviser is very different from our normal range of thought because it is both clinical and creative: clinical in the sense that facts and particulars are recognizable within a general framework, and creative in the sense that the solution to the clinically described problem probably doesn’t have a pre-established treatment.

Advising is a skill that can be learned and perfected, though it’s seldom prioritized. I think that prioritizing becoming a good adviser is absolutely essential if you think that giving advice is a core part of what you do. For the most part, “first do no harm” is a maxim that I wish more advisers practiced. If you liked this article, follow me on Twitter @cryoshon and check out my Patreon page! I’ll probably revisit this article when I have a bit more experience advising.

How to Ask A Good Scientific Question

One of the first tasks a scientist or curious person must undertake before experimentation is the formulation and positing of a scientific question. A scientific question is an extremely narrow question about reality which can be answered directly and specifically by data. Scientists pose scientific questions about obscure aspects of reality with the intent of discovering the answer via experimentation. After experimentation, the results are compared with the scientist’s most current explanation of reality, which is then adjusted if necessary. In the laboratory, the original scientific question will likely take many complicated experiments and much sustained attention before it is answered.

For everyone else, the scientific question and experimental response is much more rudimentary: if you have ever wondered what the weather was like and then stepped outside to see for yourself, you have asked a very simple and broad scientific question and followed up with an equally simple experiment. Experiments render data, which is used to adjust the hypothesis, the working model that explains reality: upon stepping outside, you may realize that it is cold, which supports your conception that the current season is winter.

Of course, a truly scientific hypothesis will seek to explain the ultimate cause as well as the proximate cause, but we’ll get into what that means later. For now, let’s investigate the concept of the hypothesis a little bit more so that we can understand the role of the scientific question a bit better.

Informally, we all carry countless hypotheses around in our head, though we don’t call them that and almost never consider them as models of reality that are informed by experimentation because of how natural the scientific process is to us. The hypotheses we are most familiar with are not even mentioned explicitly, though we rely on them deeply; our internal model of the world states that if we drop something, it will fall.

This simple hypothesis was likely formed early on in childhood, and was found to be correct over the course of many impromptu experiments where items were dropped and then were observed to fall. When our hypotheses are proven wrong by experimentation, our response is surprise, followed by a revision of the hypothesis in a way that accounts for the exception. Science at its most abstract is the continual revision of hypotheses after encountering surprising data points.

If we drop a tennis ball onto a hard floor, it will fall– then bounce back upward, gently violating our hypothesis that things will fall when dropped. Broadly speaking, our model of reality is still correct: the tennis ball does indeed fall when dropped, but we failed to account for the ball bouncing back upward, so we have to revise our hypothesis to explain the bounce. Once we have dropped the tennis ball a few more times to ensure that the first time was not a fluke, we may then adjust our hypothesis to include the possibility that some items, such as tennis balls, will bounce back up before falling again.
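
The revision loop described above is mechanical enough to sketch out. Here is a playful rendering in Python, with the objects, predictions, and observations all invented for illustration:

```python
# Playful sketch of hypothesis revision: predict, observe, and patch
# the model whenever an observation surprises us. Details are invented.

# Initial, overly broad hypothesis: anything dropped falls and stays down.
def hypothesis(obj):
    return "falls and stays down"

observations = {
    "rock": "falls and stays down",
    "tennis ball": "falls, then bounces",
}

for obj, observed in observations.items():
    if hypothesis(obj) != observed:
        print(f"Surprised by the {obj}; revising the hypothesis.")

        # Naive patch: assign bouncing to tennis balls specifically,
        # rather than generalizing about motion and collision.
        def hypothesis(obj):
            if obj == "tennis ball":
                return "falls, then bounces"
            return "falls and stays down"
```

Note that the patch in this sketch does exactly what the next paragraph criticizes: it assigns bouncing to one particular object rather than to a general account of motion and collision.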

Of course, this hypothesis adjustment regarding tennis balls is quite naive, as it assigns the property of bouncing to certain objects rather than to a generalized phenomenon of object motion and collision. The ultimate objective of the scientific process is to resolve vague hypotheses into perfect models of the world which can account for every possible state of affairs.

Hypotheses are vague and broad when first formed. Violations of the broad statements allow for clarification of the hypothesis and add detail to the model. As experiments continue to fill in the details of the hypothesis, our knowledge of reality deepens. Once our understanding of reality reaches a high enough level, we can propose matured hypotheses that can actually predict the way that reality will behave under certain conditions– this is one of the holy grails of scientific inquiry. Importantly, a prediction about the state of reality is just another type of scientific question. There is a critical caveat which I have not yet discussed, however.

Hypotheses must be testable by experimentation in order to be scientific. We will also say that hypotheses must be falsifiable. If the hypothesis states that the tennis ball bounces because of magic, it is not scientific or scientifically useful because there is no conceivable experiment which will tell us that “magic” is not the cause. We cannot interrogate more detail out of the concept of “magic” because it is immutable and mysterious by default.

Rather than filling in holes in our understanding of why tennis balls bounce, introducing the concept of magic as an explanation merely forces us to re-state the original question, “how does a tennis ball bouncing work?” In other words, introducing the concept of “magic” does not help us add details which explain the phenomenon of tennis balls bouncing, and ends up returning us to a search for more details. In general, hypotheses are better served by introducing new concepts or terminology only when necessary to label the relation of previously established data points to each other; the same goes for coining a new term.

Now that we are on the same page regarding the purpose of scientific questions– adding detail to hypotheses by testing their statements– we can get into the guts of actually posing them. It’s okay if the scientific question is broad at first, so long as increasing levels of understanding allow for more specific inquiry. The best way to practice asking a basic scientific question is to imagine a physical phenomenon that fascinates you, then ask how it works and why. Answering the scientific question “why” is usually a matter of catching up with previously performed research. Answering “how” will likely involve the same, although it may run up against the limit of human knowledge and require new experimentation to answer definitively. I am fascinated by my dog’s penchant for heavily shedding hair. Why does my dog shed so much hair, and how does she know when to shed?

There are actually a number of scientific questions here, and we must isolate them from each other and identify the most abstract question we have first. We look for the most abstract question first in order to give a sort of conceptual location for our inquiry; once we know what the largest headline of our topic is, we know where on the paper we can try to squint and resolve the fine print. In actual practice, finding the most abstract question directs us to the proper body of already performed research.

Our most abstract question will always start with “why”. Answering “why” will always require a more comprehensive understanding of the general principles that govern the phenomena in question, whereas “what” or “how” typically refers to an understanding that is limited to fewer instances. So, our most abstract question here is, “Why does my dog shed so much?”

A complete scientific explanation of why the dog sheds will include a subsection which describes how the dog knows when to shed. Generally speaking, asking “why” brings you to the larger and more comprehensively established hypothesis, whereas asking “how” brings you to the narrower, less detailed, and more mechanistic hypothesis. Answering new questions of “why” in a scientific fashion will require answering many questions of “how” and synthesizing the results. When our previously held understanding of why is completely up-ended by some new explanation of how, we call it a scientific revolution.

At this point in human history, for every question we can have about the physical world, there is already a general hypothesis which our scientific questions will fall under. This is why it is important to orient our more specific scientific questions of “how” properly; we don’t want to be looking for our answer in the wrong place. In this case, we can say that dogs shed in order to regulate their temperature.

Temperature regulation is an already established general hypothesis which falls under the even more general hypothesis of homeostasis. So, when we ask how the dog knows when to shed, we understand that whatever the mechanistic details may be, the sum of these details will result in homeostasis of the dog via regulated temperature.

Understanding the integration between scientific whys and hows is a core concept in asking a good scientific question. Now that we have clarified the general “why” by catching up with previously established research, let’s think about our question of “how” for a moment. What level of detail are we looking for? Do we want to know about the hair shedding of dogs at the molecular level, the population level, or something in between? Once we decide, we should clarify our question accordingly to ensure that we conduct the proper experiment or look for the proper information.

When we clarify our scientific question, we need to phrase it so that the information we are asking for is specific. A good way of doing this is simply rephrasing the question to ask for detailed information. Instead of asking, “how does the dog know when to shed?”, ask, “what is the mechanism that causes dogs to shed at some times and not others?”

Asking for the mechanism means that you are asking for a detailed factual account. Indicating that you are interested in the aspect of the mechanism that makes dogs shed at some times but not others clarifies the exact aspect of the mechanism of shedding that you are interested in. Asking “what is” can be a more precise way of asking “how”.

The question of the mechanism of shedding timing would be resolved even further into even more specific questions of sub-mechanisms if we were in the laboratory. Typically, scientific questions beget more scientific questions as details are uncovered by experiments which attempt to answer the original question.

As it turns out, we know from previous research that dog shedding periods are regulated by day length, which influences melatonin levels, which influences the hair growth cycle. Keen observers will note that there are many unstated scientific questions which filled in the details where I simplified using the word “influences”.

Now that you have an example of how to work through a proper scientific question from hypothesis to request for details, try it out for yourself. Asking a chain of scientific questions and researching the answers is one of the best ways to develop a sense of wonder for the complexity of our universe!

I hope you enjoyed this article; I’ve wanted to get these thoughts onto paper for quite a long time, and I assume I’ll revisit various portions of this piece later on because of how critical the topic is. If you want more content like this, check out my Twitter @cryoshon and my Patreon!

How to Become a Smarty Pants

I’ve seen a small amount of interest in a few communities regarding building status as an “intellectual” in the colloquial sense, and I think it’s probably more correct to say that people would rather be perceived as smart than as dumb, which is completely fair.

This article could also be called “How to Look and Sound Like an Intellectual”, although frankly that implies a scope much larger than anything I could discuss. So, we have a lighthearted article which purports to transform regular schlubs into smarty pants, if not genuinely smart people. If you already fashion yourself a smarty pants, read on– I know you’re already into the idea of growing your capacities further. Hopefully my prescription won’t be too harsh for any given person to follow if they desire.

While it seems a bit backward to me to desire a socially assigned label rather than the concrete skills which cause people to assign that label, building a curriculum for being a smarty pants seems like an interesting challenge, so I’ll give it a shot. I hope that this will be a practical guide on how to not only seem smarter, but actually think smarter and maybe even behave smarter. The general idea I’m going to hammer out here is that becoming an intellectual is merely a constant habit of stashing knowledge and cognitive tools. The contents of the stash are subject to compound interest as bridges between concepts are built and strengthened over time.

In many ways, I think that being a smarty pants is related to being a well rounded person in general. The primary difference between being seen as an intellectual and being seen as a well rounded person is one of expertise. The expertise of an intellectual is building “intellect”, an amorphously defined faculty which lends itself to making witty rejoinders and authoritative-sounding commentary. There’s more to being a smarty pants than puns and convincing rhetoric, though: smarty pants everywhere have been utilizing obscure namedropping since the dawn of society. Playtime is over now, though. How the heck does a person become a smarty pants instead of merely pretending to be one?

Being a smarty pants is a habit of prioritizing the acquisition of deep knowledge over superficial knowledge. Were you taught the theory of evolution in school? Recall the image that is most commonly associated with evolution. You probably pictured the monkey gradually becoming a walking man, which is wrong. The superficial knowledge that humans and monkeys had a common ancestor is extremely common, but the deeper knowledge is that taxonomically, evolution behaves like a branched tree rather than a series of points along a line.

See how I just scored some smarty pants points by taking a superficial idea and clarifying it with detailed evidence which is more accurate? That’s a core smarty pants technique, and it’s only possible if you have deep knowledge in the first place. Another smarty pants technique is anticipating misconceptions before they occur, and clearing them up preemptively. How should you acquire deep knowledge, though?

Stop watching “the news”, TV, movies, cat videos, and “shows”. Harsh, I know– but this step is completely necessary until a person has rooted themselves in being a smarty pants. This media is intended to prime you for certain behaviors and thoughts, occupy your time outside of work, and provide a sensation of entertainment rather than enriching your mind. The more you consume these media, the less your mind is your own, and the more your mind is merely a collection of tropes placed there by someone else. Choosing to be a smarty pants is the same as choosing isolation from the noise of the irrelevant.

For the most part, these media are sources of superficial information, never deep information. You can’t be a smarty pants if you’re only loaded with Big Bang Theory quotes, because being a smarty pants means knowing things that other people don’t know and synthesizing concepts in ways that other people wouldn’t or couldn’t. There is zero mental effort involved in consuming the vast majority of these media, even the purportedly “educational” shows and documentaries, which are largely vapid. Watching a documentary is only the barest introduction to a topic. Intellectuals read, then think, then repeat.

I guess I’ve said some pretty radical things here, but try going back and viewing some media in the light I’ve cast. There are exceptions to the rule, of course: The Wire, The Deer Hunter, American Beauty, or an exceptionally crafted documentary. The idea is that these deeper works are mentally participatory rather than passively consumed; the depth and emotionality that the best audiovisual media convey can be considered fine art, and smarty pants love fine art. During your smarty pants training, though, I would still avoid all of the above. Speaking of your smarty pants training…

Stop reading “the news”, gossip of any kind, Facebook, Twitter, clickbait articles, and magazines. These things are all motherlodes of superficial information. As Murakami truthfully said, “If you only read the books that everyone else is reading, you can only think what everyone else is thinking.” This concept is absolutely critical because an intellectual is defined by depth of thought, quality of thought, and originality of thought relative to the normal expectation. Loading up on intellectual junk food is useless for this purpose, so get rid of it and you will instantly get smarter.

Notice how I namedropped Murakami there? That’s worth smarty pants points because it’s a conceptual tie-in that is directly relevant to the point I’m trying to make, and it expresses the idea more elegantly than I could on my own. Don’t just namedrop obscure people wildly, though, as you’ll look more like a jackass than a smarty pants; the line is blurry at times. Being a fresh-faced smarty pants frequently involves making the people around you feel inadequate, but it shouldn’t when practiced properly!

The purpose of self-enrichment is self-benefit, not putting down others. Knowledge may frequently be controversial or unwelcome, so begin to be sensitive to that when conversing with others. Life isn’t a contest to see who can show off the most factual knowledge, but if it were, a good smarty pants would be in the running to win it, and that’s your new goal.

Pick an area that will be your expertise. Pick something you find interesting and can learn about without laboring against your attention capacity. This should be distinct from a hobby. Which topic you address is up to you, but I’d highly suggest approaching whatever topic you choose in a multidisciplinary manner. If you’re interested in psychology, be sure to devour some sociology. If you’re interested in biology, grab some chemistry and physics. If you’re a philosopher, try literature or history. Your expertise in your chosen field will mature over time, and eventually you should branch out to gain expertise in a new field.

The idea here is that the process of picking an area of expertise is itself useful to the smarty pants. By evaluating different areas, the smarty pants will get a feel for what they’re interested in, what’s current, and what’s boring. The most intellectually fruitful areas of expertise have a lot of cross-applicability to other areas and concepts, an established corpus of literature, and a lot of superficial everyday-life correlates. Suitable examples of areas of expertise are “the history of science” or “modern political thought”. Unsuitable examples would be “dogs” or “engine design”; such narrow areas aren’t applicable to outside concepts and don’t confer new paradigms of thought.

Start reading books, in-depth articles, and scholarly summaries on the topics in which you want to develop your expertise. A smarty pants has a hungry mind and needs a constant supply of brain food, which is synonymous with deep knowledge. Reading books and developing deep knowledge is never finished for the aspiring smarty pants. Plow through book after book; ensure that the most referenced scholarly works or industry texts are well understood. Understand who the major thinkers and groups are within your area of expertise, and be able to explain their thoughts and relationships. Quality of information takes priority over quantity, however.

Merely stopping the flow of bad information and starting a flow of good information isn’t enough to be a real smarty pants, though it’s a good start. In order to really change ourselves into smarty pants, we must change the way we engage with the world. As mentioned before regarding media consumption, a smarty pants must interrogate the world with an active mind rather than a passive mind. What do I mean here?

A passive mind watches the world and receives its thoughts as if handed down from on high. Passive minds do not chew on incoming information before internalizing it; we recognize this most pungently when a relative makes regrettable political statements culled directly from Fox News. An active mind constantly questions validity, makes comparisons to previous concepts, and rejects faulty logic. An active mind cross-references the current topic with its corpus of knowledge, finding inconsistencies.

Creating an active mind is an extremely large task that I’ll probably break out into another full article, but suffice it to say that the smarty pants must get into the habit of chewing on incoming information and assessing its value before swallowing. Learning how to think and write systematically, and how to disagree intelligently, are both skills that a smarty pants can make use of.

Speaking of relatives, a smarty pants needs good company in order to grow. Ditch your dumb old friends and get some folks who are definitely smarter than you; they exist, no matter what you may think of yourself. You don’t really need to ditch your old friends, but you do need to get the brain juices flowing through social contact with other smarty pants. There are many groups on the internet which purport to be the home of smart people, but my personal choice is HackerNews.

It’ll hurt to feel dumb all the time, but remember that feeling dumb means you are being exposed to difficult new concepts or information. Feeling dumb is the ideal situation for an aspiring smarty pants because it means you are feeling pressure that will promote growth to meet the demands of your environment. Every time you feel dumb, catch the feeling, resolve it to an explicit insecurity, then gather and process information until that insecurity is squashed by understanding. Like I said before, this step is unpleasant, but nobody said being a smarty pants was easy.

This concludes my primer on how to be a smarty pants. I’ll be writing more on this topic, though a bit more seriously and with more specificity. I’d really like to publish a general “how to think critically” article in the near future; critical thinking is, of course, a core smarty pants skill. I also have a reading list for the most general and abstract “smarty pants education” that I’ll be publishing relatively soon. Until then, try practicing the bold points here.

Be sure to follow me on Twitter @cryoshon and check out my Patreon page!