Silicon Slopes Tech Summit 2019

Day 1 of the 2019 SSTS is in the books and my favorite session from the day was a panel discussion on The Future of AI Talent in Utah. Not because I know anything about AI, but because one can apply the points made by the panel across many disciplines.

When answering a question about talent in Utah, the panel consensus was something along the lines of “Most applicants look the same—1% of the applicant pool are getting all the offers.”

I suspect this is somewhat exaggerated, but it raises the question: What makes that 1% special? How do people differentiate themselves?

What did the panelists say to the young crowd looking at AI as a career path?

  • Don’t rely on “cookie-cutter” (or in software terms “hello world”) projects to demonstrate competency.
  • Show a passion for solving problems.
  • Demonstrate your capabilities by publicly sharing your work—in software development, that means maintaining a public GitHub repository for one or more projects. For those of us not in the software space, publishing in a reviewed, online journal is an option.
  • And while we’ve all heard the college dropout success stories, having a strong educational foundation and simultaneously learning how to solve NEW problems—the ones not in the textbook today—is needed to stay competitive.

These recommendations sound like good advice for everyone looking to differentiate themselves from the crowd.

How long do you need to learn something new?

Or build on something old?

A recent article from Inside Higher Ed highlighted how experimentation in the delivery of online courses is driving the discussion on what the proper length of a class should be.

The familiar 12 to 15-week blocks align with my experience, and it was only after starting my position at the University of Utah that I realized this was no longer the norm. In my department, two 3-credit, semester-long courses were broken up (long ago) into six 1-credit, 5-week courses. For upper-division and graduate-level classes, other departments offer “first-half” and “second-half” courses during the traditional semester, allowing students to select from a broader range of topics than might otherwise be available. What has been the shortest course length? A single 3-credit course over five days (8:00 am to 5:00 pm), with the caveat that readings and assignments are due before the first day of class (i.e., there is pre-work involved) and students should expect additional homework each evening.

Most academics consider the last example extreme; however, this model is typical for professional development in many industries. I was fortunate to work in a company that valued professional development and participated in two courses—each taught as full days over a single week—that were similar to a university course. While such courses are not graded in the traditional sense, managers have to approve their cost and weigh the loss of immediate work against the promise of improved productivity in the future. Good luck getting additional professional development approved if you cannot demonstrate benefits from your previous development courses. One of the biggest challenges in professional development is getting people to focus on the course and set aside the distractions of work—easier said than done.

So, back to my original question: How long does it take to learn something new or to build upon a previous skill? Can this be done in a single week? Or does it take three-plus months? For a traditional course, the mantra is two to three hours of study per credit hour—for three hours in the classroom each week, the expectation is a minimum of six hours of work outside, a total of 9 hours per week or 135 hours over the 15 weeks of a traditional semester. Assuming the class-to-study ratio is closer to 1:1, the total time is 90 hours; depending on the topic, 90 hours would be manageable, and it could be even more viable with structured pre-work. Of course, one is not an expert at this point but has obtained a level of competency with the subject. As a “self-directed” learner, 45 to 90 hours has been a good approximation of the time needed for learning as I’ve built up various skill sets.
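The arithmetic above can be captured in a few lines. This is just a sketch of the estimate; the function and variable names are mine, not from any standard:

```python
# Back-of-the-envelope study-time estimate for a course.
# Hypothetical helper; the ratios come from the discussion above.
def total_hours(credits, study_ratio, weeks):
    """In-class hours plus outside study per week, times the number of weeks."""
    weekly = credits * (1 + study_ratio)  # e.g., 3 class hours + 2x study
    return weekly * weeks

# Traditional guidance: 2 hours of study per credit hour, 15 weeks.
traditional = total_hours(3, 2, 15)  # 9 hours/week
# A leaner 1:1 class-to-study ratio over the same semester.
lean = total_hours(3, 1, 15)         # 6 hours/week

print(traditional, lean)  # 135 90
```

The same totals hold whether the weeks are compressed into a five-day intensive or spread across a semester; only the weekly load changes.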

Could this type of intense schedule work? Would it be possible to take a three-course calculus series over 15 weeks if that was the focus? Probably, but we also need to consider the instructors. University faculty members need time outside the class for non-teaching activities: research, service, administration, course preparation, and advising are the most visible out-of-class activities expected at the modern university, and this engagement is needed. But there might be some appeal to faculty, as completing a teaching assignment in five weeks may open up opportunities for focused work during the rest of the semester.

If universities can exploit technology to maximize the high-value activities of their faculty, the traditional classroom will change, and it may reflect the time-intensive learning environments used by industry for professional development. It is worth exploring as the need for life-long learning will force us to become more efficient in education.

Lifelong Learning

The concept of continual improvement is an established business practice in today’s economy. The idea is rooted in the statistical process control methods developed by Walter A. Shewhart of Bell Labs and later generalized by W. Edwards Deming into the PDSA cycle:

  • Plan: What are the desired outputs? What can be changed to achieve the desired goal?
  • Do: Implement the plan and gather data.
  • Study: Review the outcomes based on the collected data (more commonly called the “Check” phase).
  • Act: If the outcomes were met, act to make the plan the new standard.

Of course, when completed, we return to the planning phase and look for further improvement to continue the cycle. Are we communicating this idea to students and employees? Can we apply this principle to the concept of “lifelong learning?”
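The cycle above can be sketched as a simple loop. Everything here (the function, the callbacks, the numeric example) is illustrative and mine, not from Shewhart or Deming:

```python
# A toy sketch of the PDSA cycle as an iterative loop.
def pdsa(state, improve, evaluate, target, max_cycles=10):
    """Repeat Plan-Do-Study-Act until the evaluated outcome meets the target."""
    outcome = evaluate(state)
    for _ in range(max_cycles):
        plan = improve(state)     # Plan: propose a change
        outcome = evaluate(plan)  # Do: implement and gather data
        state = plan              # Act: adopt the plan as the new baseline
        if outcome >= target:     # Study: review the outcomes
            break                 # target met; the plan becomes the standard
    return state, outcome

# Example: each cycle raises a quality score by 10 points until it reaches 90.
final, score = pdsa(state=50, improve=lambda s: s + 10,
                    evaluate=lambda s: s, target=90)
print(final, score)  # 90 90
```

The point of the sketch is the shape, not the numbers: each pass through the loop feeds what was learned back into the next plan.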

Unwritten rules?

Working with graduate students in higher education is a satisfying experience, and it is rewarding to watch their personal growth as they progress through their program of study—once a student is in our program, I see it as my job to provide guidance and make sure they don’t get stuck. It is upsetting, both to the student and to me, when they get stuck or leave the program because they failed to follow a documented process. Even though there may be a breakdown in the process, individuals are responsible for becoming knowledgeable about the requirements that affect them. But what about undocumented procedures or “rules?” I struggled this week to resolve a complicated issue, and part of the problem—I figured out at the end of the process—was that I did not have all the information. There were unwritten rules.

At the height of my frustration, I thought this was a problem only found in academics; however, after giving myself time to reflect, I realize it is a problem in industry settings as well.

So my question is why? Why do we have unwritten rules?

Of course, it could be that there are no rules, just guidelines that are intended to be flexible based on the situation. This distinction is important. Guidelines provide flexibility for both parties in a negotiation, which may not be the case with rules. Until it is written down, I would argue any process is just a guideline and negotiation is an option.

Where the rules are documented, there may be unwritten exceptions—situations that warrant deviation from the normal process. Unwritten exceptions recognize that managers (or administrators) encounter problems that deserve consideration based on the merits of the case. But why are they unwritten?

I think this is an issue of trust. Do you trust people not to abuse the exceptions? In organizations that operate with high levels of trust, a defined process for evaluating exceptions removes the possibility of arbitrary application or favoritism.

Documenting exceptions provides a level of transparency and levels the playing field for those not inclined to challenge the status quo. Individuals who are reluctant to ask “why?” effectively lose out on options that are available to those willing to test the system. I doubt this is the intent, but it can be the result.

As managers, we have a choice on how we set up the rules. I would argue that we should write them down and strive for a transparent and fair process.

Is the sum of the whole greater than the parts?

Professionals don’t receive letter grades at work (at least, I’ve never received one). During our annual reviews, we might see statements such as “meets expectations” or “exceeds expectations”—and hopefully not “needs improvement”; however, the summary statement is, or should be, part of a larger dialog on what went well over the past year, what didn’t, and what the expectations are for the next year.

Compare the way we evaluate work performance with education. For employees, we assess performance over monthly, quarterly, and yearly time frames. Students have a fixed time—days or weeks—to master a new set of skills, and it is pretty much “all or nothing.” For employees, we have a simple metric: is the work getting done and meeting expectations? For students, homework is assigned, exams are given, answers are graded, and at the end of a class, we assume a transfer of knowledge.

At work, we are expected to improve and develop competencies continuously, and while most organizations follow an annual review cycle, the tasks determine the schedule. Some jobs require a short time frame, and others take months; often, the ability of the individual or group drives the plan. “How long is it going to take?” When the answer does not match the need, we look to adjust the scope or budget.

We have a fixed schedule in education: 15 chapters, or five novels and reviews, or three research papers—in 15 weeks, regardless of ability or prior training. For mathematics and the sciences, knowledge is assumed based on the previous courses taken, but what if the student did not obtain competency? Can we expect that a C stands for competent? We assume that the sum of knowledge obtained in individual courses is greater than the parts.

As someone who has hired people to do work, I find the standard resume limited in what it tells me about what an individual can do. Admittedly, some folks can craft two pages that make a compelling case for an interview, but we don’t know what someone can do unless we’re fortunate enough to see their work. This need makes hiring within your network compelling—you have often seen the applicant’s work, or at least have personal connections with those who have. The hiring decision is expensive for the employer and fraught with risks: will this person be able to do the work, will they be able to integrate into the team, will they be able to contribute to future projects?

Educators have a responsibility to help students document how their skill sets address these concerns, and it’s not by issuing a diploma and a transcript. We need to help students document the competencies they acquire during the course of their education.

Don’t let my timeline limit your path forward

Applying the mantra of “what can I do now” is an effective way to move forward on long, complex projects that may be stuck or delayed. So, before becoming fixated on the finish, focus on what you can do now to get started and keep moving.

Following that . . . Academic calendars are not very flexible.
K-12 education has been on a fall/spring schedule for over 100 years and, surprising to me, an agricultural calendar was not the driving factor. (https://www.pbs.org/newshour/education/debunking-myth-summer-vacation)

For better or worse, higher education follows a similar calendar, although most public colleges and universities now allow undergraduate students to start a program in fall, spring, or summer (i.e., they have rolling enrollment). This is not the case for most graduate and professional programs, where a new cohort is formed each year. If someone wants to start a professional program and misses the application deadline, they will look at the calendar and think there is nothing they can do until next year—don’t let the institution’s schedule prevent you from moving forward.

In the fall, I get emails asking if it’s too late to enroll—and at this point, it is for our program. However, for those wanting to start a graduate degree or certificate program after the official application deadline, I’ve recommended that they take the opportunity to identify possible gaps in their education or training that may come up during the application review when they do apply. Admission committees look at many factors (GPA, letters of recommendation, statements of purpose, etc.); however, most are trying to answer a simple question: will this person be successful in our program? Here are a few items I’ve recommended to potential students as they navigated the application process.

Enroll in undergraduate courses (for credit) as a non-matriculated student to address gaps between your undergraduate degree and the graduate program. (A non-matriculated student has permission to register but is not formally working toward a degree.) Earning a “B or better” can demonstrate readiness for more advanced work in the field and can effectively offset concerns that admission committees may have if a transcript shows underperformance as an undergraduate. If your undergraduate work was sufficient and you meet the prerequisites, then . . .

Ask for permission to enroll in a graduate course (for credit) as a non-matriculated student. If you are able to take a class associated with the program of study, it may be counted toward the graduate degree. But be warned: colleges and universities have strict rules about whether this may be done and, if it is allowed, how many credits earned as a non-matriculated student can be applied to the degree or certificate.

My advice is pretty simple: even though you can’t start a program now, you can move forward, and what looked like a one-year delay may become less than six months.

Building New Skills

There is no better gig than getting paid to learn.

In August of last year, I decided to take on the challenge of teaching an applied statistical techniques course—one small problem—I’m not a statistician. Like most scientists and engineers, I’ve used (and abused) statistics my entire career, so I do have experience with the concepts. Also, my math skills are not too shabby, as I have incorporated calculus, differential equations, and other advanced topics into my work over the years. (Yes, some people do use algebra.) The good news? I had about four months before the first day of class and enough control over my schedule that I could commit 10+ hours per week to developing the course content. The real challenge was to create the course using R, a language and environment for statistical computing and graphics. Being open about my own abilities, I was not proficient with this language.

With this in mind, I set out to create the course—with emphasis on the “applied” and “techniques.” Starting in the fall, I developed a module that students would work through each week. I estimated my 10–12 hours of work each week would translate into three hours of material for each class, and this worked out surprisingly well.

The course is coming to an end this week, and the best part of the experience was how much I learned (or maybe relearned). Most surprising was how much I learned the SECOND time I worked through the materials during the weeks I taught the course.

I classify skill sets at three levels:

  • novice: apply basic principles to solve structured problems,
  • hack: gather external resources to solve moderately complex problems,
  • expert: apply advanced principles to solve complex problems with minimal use of external resources.

I still consider myself a hack when it comes to performing statistical analysis using R, but having the opportunity to expand my own skill set and providing a framework for others to learn something new—that was a great gig.

Computer science for everyone!

Really? For everyone?

I think I agree with the idea of teaching computer science in every Utah school as presented in a recent Silicon Slopes post; however, I wasn’t sure what that meant.

What would universal computer science education opportunities for K-12 students in Utah include? After all, computer science is a broad discipline covering topics such as:

  • Algorithms and Computational Geometry
  • Architecture and VLSI
  • Data, Databases and Information Management
  • Formal Methods and Verification
  • Graphics and Animation
  • Image Analysis
  • Human-Computer Interaction
  • Machine Learning and Natural Language Processing
  • Networking, Embedded Systems, and Operating Systems
  • Parallel Computing
  • Programming Languages and Compilers
  • Robotics
  • Scientific Computing
  • Visualization

Which of these topics are appropriate for K-12? (I compiled the list above from the University of Utah’s Department of Computer Science website.)

I would assert that, currently, only a small fraction of K-12 students participate in computer science as defined by the topics above. Robotics is the most obvious area where we see activity, but that activity is driven by student interest rather than a broader curriculum. Robotics is an interesting example because it encompasses multiple aspects of computer science (as defined above), including human-computer interaction, programming, networking, and embedded systems. The depth of knowledge needed for each of these will depend on the project; however, it is an area where teachers can integrate multiple projects into the K-12 curriculum.

But would an area like robotics be helpful to all students?

Perhaps a better idea is to add a “computer science” learning objective to the current core curriculum. What would be possible in science courses today?

Biology, chemistry, and physics could all incorporate areas such as data visualization and basic scientific computing (i.e., using computers to solve problems), but what about programming languages, compilers, parallel computing, or formal verification?

Before jumping on the computer science bandwagon, we need to ask a straightforward question: What knowledge do we want students to demonstrate upon graduation from high school?

My observation is that students receive very little formal training in what we would probably claim are necessary computer skills: writing with a word processor, working with spreadsheets and databases, etc. How much of Microsoft Office (or LibreOffice or Google Docs or “insert your favorite, regular computer tasks here”) should students know when they leave high school? These skills are not computer science, but they are useful tools to have in one’s toolbelt.

Do schools still offer “typing?” During my first university teaching position, I asked students to type their reports. One student told me directly that she went to a private school and didn’t “type.” (In a voice indicating that she believed typing was below her—this was the late 90s, so computers were not as ubiquitous as they are today.) To this day, I am happy I took typing in high school; it has made work using a keyboard easier. Would this be considered an essential skill needed for computer science? Typing, as taught before typewriters became extinct, was targeted toward vocational workers; has it risen in status? I think the proper term today is “keyboarding;” however, is it still treated as a vocational course? Probably so.

If we advocate teaching computer science in K-12 classrooms, we need to define precisely what skills we want students to master. These skills are not going to be determined by the computer science departments; they’re going to be set by the disciplines that have adopted technology as part of their work. In math, students should have the opportunity to explore mathematical equations by defining the function, ranges, etc., as they explore new concepts, but only after understanding how to work through the problems manually. Biology, chemistry, and physics can all incorporate data analysis, scientific computing, visualization, and other topics into the classroom, but it can’t be at the expense of building a basic understanding of the underlying principles. Building a solid foundation is critical before jumping into complex problems that need the tools of computer science.

I hope someone is carefully thinking this through.

What would be the cost of publicly financed college?

For the taxpayers of Utah, 1% to 1.5% is my best guess.

Bernie Sanders promoted “free college” during the 2016 Democratic primaries; unfortunately, Senator Sanders wasn’t able to translate that message into a winning strategy. Democrats, in general, are failing to capitalize on an issue that should resonate with the majority of Americans.

Who is concerned about college cost? If it’s not everybody, it should be. There are nearly 120,000 students enrolled in Utah’s colleges and universities. With approximately 900,000 households, we can estimate that more than 1 in 10 households have a college student and are directly impacted by the cost of college. I would wager that families with school-age children (K-12) are equally concerned. Add grandparents to the equation, and what percentage of Utah households are concerned? Hundreds of thousands of people, if not one to two million, who want the next generation to succeed.

If the cost of college were negligible, wouldn’t the majority of these people (parents, grandparents, aunts, uncles, neighbors) consider college a worthwhile option?

If the definition of a public institution is one that receives the majority of its funding from the public, then the state of Utah is on the verge of losing its public institutions of higher education (Table 1).

Table 1: Tuition and tax funds for Utah higher education.

So, I want to pose a question: What would it cost to fund public higher education fully?

In Utah, the total FY 2016 budget was $14.2 billion; 12% of that, approximately $1.7 billion, was for higher education. This amount covers the operating and capital budgets. Students currently spend roughly $680 million in tuition and fees (Table 2). Fully funding higher education would require increasing the amount of money spent on it by 40%, but this is less than a 5% increase in the total state budget.

Table 2: Tuition and tax funds for Utah higher education.

What would this mean to taxpayers? Census estimates show Utah having just over 900 thousand households in 2016, with a median household income of $61,000. A back-of-the-envelope calculation would have the typical family pay less than $800 per year, which would amount to a 1% to 1.5% increase in the income tax rate (currently 5%). If you are a parent who wants their child to go to college, this is a bargain. For those of us whose children will be out of college, it’s a price I’d be willing to pay to provide qualified students the opportunity to succeed along a path with known financial benefits.
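A quick sanity check of the back-of-the-envelope numbers above. The inputs are the rounded figures quoted in the text; the variable names are mine, and the results are estimates only:

```python
# Rounded figures from the text; all dollar amounts in US dollars.
tuition_and_fees = 680e6   # tuition and fees paid by students statewide
higher_ed_budget = 1.7e9   # Utah's FY 2016 higher-education budget
state_budget = 14.2e9      # Utah's total FY 2016 budget
households = 900_000       # approximate Utah households, 2016
median_income = 61_000     # median household income, 2016

budget_increase = tuition_and_fees / higher_ed_budget  # ~0.40 (a 40% increase)
share_of_state = tuition_and_fees / state_budget       # under 0.05 (5% of budget)
per_household = tuition_and_fees / households          # under $800 per year
tax_rate_bump = per_household / median_income          # ~1.2 percentage points

print(round(budget_increase, 2), round(per_household), round(tax_rate_bump * 100, 1))
# 0.4 756 1.2
```

The roughly $756 per household lands between the 1% and 1.5% income-tax-rate bounds quoted above, which is where the headline estimate comes from.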

Is this a reasonable estimate? (It’s a start.)
Is this feasible? (It seems possible to me.)

It’s not free college—taxpayers will pay the bill—but it is the ideal of public education, and an investment in our state, that we should consider.

First-year Environmental Chemistry After 30 Years

With both of my daughters now in their first year of college, and me having a Ph.D. in Chemistry, there is an assumption that I can be a helpful resource with basic chemistry. (For the record, this is not a safe assumption.)

In my efforts to be helpful, I pulled out my first-year Chemistry text. In the spring of 1987 I was completing my second semester of Chemistry, and as I reviewed the old class syllabus, I noticed that Environmental Chemistry was one of the chapters covered. In thinking about the course, I specifically remember Professor John Hubbard making the analogy that the environment was like a buffer. I don’t recall the particular system, but the analogy applies to both the atmosphere and oceans.

The definition of a buffer solution is pretty simple; it’s a solution that resists change in pH upon addition of either an acid or a base. In a broader sense, we use the term to describe any system that resists change upon addition of a compound that would alter the equilibrium of the system.

A single section of the chapter discussed the topics of acid rain, photochemical smog, carbon monoxide, and climate. Within this text, a single paragraph summarized the role of carbon dioxide in maintaining surface temperatures, and it carried the warning: “If the calculated effect of doubling of CO2 level on the surface temperature is correct, this means that the earth’s temperature will be 3 degrees C higher within 70 years.” (Chemistry: The Central Science, T. Brown and H.E. LeMay, Jr., 3rd ed., Prentice-Hall, Englewood Cliffs, 1985, p. 393.) Current CO2 levels are 405 ppm (parts per million) compared to 330 ppm as referenced in the 1985 text (https://www.scientificamerican.com/article/atmospheric-carbon-dioxide-hits-record-levels/), a 23% increase.

I was curious… can I see this prediction in data from my home locale of Salt Lake City, Utah?

I pulled a simple data set from NOAA’s website: annual averages from 1948 through 2016. Here are the data and a simple analysis.

Figure: Annual Average Temperature (°F) for Salt Lake City, UT (1948-2016).

It’s pretty remarkable. Over the past 69 years, the average annual temperature has increased at a rate of 0.05 °F/year (0.028 °C/year).

The average (mean) value over this period is 52.4 °F (11.33 °C), with 95% lower and upper confidence limits of 52.0 °F (11.11 °C) and 52.8 °F (11.56 °C), respectively. The top and bottom traces on the graph show the 95% prediction intervals.
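The trend estimate above can be sketched with an ordinary least-squares fit over (year, temperature) pairs. The data below are synthetic, constructed only to match the reported magnitudes (a 52.4 °F mean and a 0.05 °F/year slope); they are not the real NOAA series:

```python
# A minimal least-squares trend fit for an annual temperature series.
def linear_trend(years, temps):
    """Ordinary least-squares slope (deg/year) and intercept."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
             / sum((x - mean_x) ** 2 for x in years))
    return slope, mean_y - slope * mean_x

# Synthetic stand-in for the 1948-2016 annual averages (69 values).
years = list(range(1948, 2017))
temps = [52.4 + 0.05 * (y - 1982) for y in years]  # 0.05 deg F/year drift

slope, intercept = linear_trend(years, temps)
print(round(slope, 3))  # 0.05
```

Run against the actual NOAA annual averages, the same fit yields the 0.05 °F/year slope reported above; the confidence and prediction intervals require the residual variance as well, which this sketch omits.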

So what does this mean? If we look at temperatures from 2012 through 2016, they all fall inside the 95% prediction intervals. So, no problem! (Right?)

But go back to the initial premise that the atmosphere is a buffer: when will we know that we’ve exceeded the “buffer capacity” for CO2?

Figure: An example of a buffer curve showing the variable under observation versus percent completion.

And that’s the problem: we don’t know how much of the buffering capacity we’ve consumed, and we probably won’t know where we are on the curve until after we’ve reached a tipping point and temperatures accelerate beyond the slow, apparently linear trend we observe today.