Boulder, Colorado, United States
Contact Info
353 followers
344 connections
Experience & Education
-
Grafana Labs
****** ******** ********
-
******
******/********* ******** ********
-
***********.**
****** ******** ********
-
******** ****** ** *****
** *********** *******
-
Publications
-
Reduction of heat capacity and phonon group velocity in silicon nanowires
Journal of Applied Physics
First author of a computational physics publication on the effect of phonons on heat capacity in silicon nanowires
Honors & Awards
-
Hackathon Winner
-
Won a Hackathon award for a group project making a proof of concept @mentions feature
Other similar profiles
-
Bob Cotton
Denver, CO
-
Vernon Miller
Erie, CO
-
Geoffrey Morgan
Flagstaff, AZ
-
Alexander Weaver
Austin, Texas Metropolitan Area
-
Fred Lintz
Englewood, CO
-
Annie Clymer
Cross Product Technical Support Specialist at Alchemer
Broomfield, CO
-
Ronald Roe
Choctaw, OK
-
Connor Kerry
Santa Barbara, CA
-
Adam Pierce
San Diego, CA
-
Daren May
Littleton, CO
-
Carina Sweet
Denver, CO
-
Michael Vienneau
Denver, CO
-
Zach DiMarco
Staff Engineer
Thornton, CO
-
nonye Okeke
React Native Developer at HABA INSURETECH
Nigeria
-
Julien Duchesne
Senior Software Engineer at Grafana Labs
Saint-Antoine-de-Tilly, QC
-
Rob Whelan
Limoges
-
Jev Forsberg
Denver Metropolitan Area
-
💡Oliver J. White
Czechia
-
Claire Liu
Mountain View, CA
-
Ben Mier
Greater Seattle Area
Explore more posts
-
Joshua Ellinger ✨
🌟 Attention Engineering Managers and Technical Leaders! 🌟 In just two weeks, on June 28th at 10 AM PDT, I’m excited to host an interactive roundtable on Feature Flags and Delivery in conjunction with ELC! 🚀 Why Attend? This is a collaborative discussion where your expertise and curiosity are key! Whether you're experienced or eager to learn, your contributions are invaluable. 🔍 Discussion Points: - Decoupling Processes - Boosting Team Efficiency - Optimizing Feature Delivery - Tracking Platform Usage Expect rich conversations on best practices, unique implementations, methodologies, and the latest tools enhancing feature flagging and delivery. Who Should Join? - Engineering Managers - Technical Leads - DevOps Engineers - Software Developers This roundtable is your chance to: - Share your experiences - Gain new insights - Network with peers - Discover practical recommendations RSVP Now and mark your calendars for June 28th at 10 AM PDT! Let’s collaborate and elevate our skills together! #EngineeringManagement #TechLeaders #FeatureFlags #DevOps #SoftwareDevelopment #TeamEfficiency #TechRoundtable
9
-
John Mitchell
I often run into Business types or even Developers who are kind of fuzzy on what DevOps is. The TLDR I give is "you know your application and features? DevOps is *everything else*: databases, pipelines, permissions, networks, domains, certificates, architecture..." Here's a quick and breezy high-level explanation by National Treasure Nick Janetakis -- https://lnkd.in/gdMFCCbh
2
-
James Rosen
I have a question for software teams that meet ISO-27001 / SOC 2 Type II *and* who use outsourced software development with git. Do you have all your contractors sign all your policies so they can contribute to your codebase directly? Or do you have the outsourced team work in a separate git repository and then have an employee merge to the main repo? Or some other model?
3
-
Casten Riepling
Ok, so the Rabbit R1 is a little device running an Android app. Which begs the question: Why not just ship it as an app for phones? From a marketing perspective, it's hard to imagine commensurate hype coming from an app alone. From a customer perspective, it's hard to imagine the experience offered by the dedicated device is worth the burden of lugging it around. At least the Humane pin is always out and not an extra thing to carry around. That being said, I love Teenage Engineering designs. Can I get some likes for a Teenage Engineering Phone?!?! https://lnkd.in/gvMqHj7N #rabbitr1 #teenageengineering #humane
4
2 Comments -
Haris Amin
First off Larry Diehl is one of the brightest ppl I ever had the pleasure to work with. His technical insights and experience are only matched by his pragmatic approach in building products. I've personally witnessed and experienced him doing so in complicated dense domains and his discipline, awareness, and ability to navigate complex + dense topics (e.g. type theory and formal verification) while keeping a pragmatic product development roadmap is truly astonishing. Secondly, I've had the privilege of seeing Larry build Colimit in real-time! It truly is a seamless gateway to bring reliability and thoroughness to apis backed by solid and principled fundamentals. Like it or not...everyone building APIs IS building distributed systems. However, not all of us have the time, expertise, and skills to build and test them reliably. Larry Diehl and Colimit are paving the way to bring that to the masses. It's truly a space that needs more love and attention. Hoping to bring Colimit to some of our teams in due time too! Super excited for what's to come! #formalverification #distributedsystems #apis
14
1 Comment -
Jeremy Manson
Been thinking a bit more about AI and programming. In a previous post, I mentioned that I was worried about the "uncanny valley" effect. For those who don't know, the phrase "uncanny valley" was coined to refer to people's reactions to human-looking robots. We aren't bothered at all by robots that don't look like human beings (think R2D2), or robots that look a bit like human beings (think C3PO). We aren't bothered by robots that look exactly like human beings (think Blade Runner). We *are* bothered by something that looks a lot, but not quite, like a human being - it doesn't blink enough, its skin is the wrong texture, and so on. It's that dip in the middle - where it's almost, but not quite, human - that's referred to as the uncanny valley. I'm sure someone has pointed this out before, but I think the same curve applies to the usefulness of AI. As an assistive technology, when you know it is being helpful, but can't take over for you, it's useful. Think autocomplete, or grammar checking. As something that can completely replace a human being, it's useful. You can argue that we're getting close to that with, say, image recognition (leaving aside problematic biases). There is a similar valley between these two, where human beings think that AI can take over for them, but it can't. (This effect probably already has another name, but I don't know what it is, so I'll live with "AI Uncanny Valley".) This AI Uncanny Valley can lead to funny results, as it does for the law clerks who have ChatGPT prepare legal briefs, only to have judges point out that they are citing case law that doesn't exist. It can also lead to tragic results, as it does when people trust their lives to Tesla's "Full Self Driving" mode and get killed. But the bad outcomes aren't always so obvious.
One of the strategies that gets proposed for AI-driven software development is to have users describe the outcomes they want - feed the system a list of tests that have to be passed - and let the AI develop code that will make the outcomes happen. To the user, passing all of the tests would make it seem like the AI did what it was supposed to do. The problem is that human beings are really, really bad at coming up with tests. I watched a talk (see below) by researchers at NC state recently, who did a study of novice programmers. They found (to no one's surprise) that novices undertested their code in general, overtested the parts of the code that worked, and that their tests often mismatched requirements. They recommended a checklist of best practices for test developers, of the same kind that surgeons use before an operation to prevent infection. Imagine a world where this is what drives software development. Novices state requirements, badly. Code is generated that meets those requirements, but no one wrote it, and no one understands it. Code lives in that uncanny valley and no one ever notices. And we think we have software quality problems today... #AI #testing
5
2 Comments -
Oren Ellenbogen
Staff+ Engineers own and nurture missions, not features. Mission: "I own turning our systems' downtime from minutes and hours to seconds." Feature: "I own adding a Circuit Breaker to enable faster recovery time for this component." Missions are a narrative. Something you can go to sleep with and wake up in the morning thinking about how to make things drastically better. Features are tiny stories and milestones in this journey. Missions are what you're proud of when you look at your impact 10 years from now, even though you won't recall all the small details. Missions are a way to state purpose in a given context (role, company, team, product, timing, etc.) If you don't have enough missions, you don't need many Staff+ Engineers. If Staff+ Engineers talk in features, they'll get bored quickly ("did it already") and not develop the skills to own bigger missions with more ambiguity. This is often a lack of expectation-setting where the manager and the Staff+ engineer didn't align on responsibilities. If a Staff+ engineer cannot own and lead a mission (with high agency), by setting direction, creating buy-in, dealing with friction and executing well - they're not ready yet. Great Staff+ engineers discover missions leveraging their business understanding and technical skills. They create opportunities for themselves and others while aligning the outcome to serve the customers and the business.
200
10 Comments -
Emily Nakashima
Honeycomb's storage engine, Retriever, is the key differentiator behind so much of our product. I sometimes get questions from other engineering leaders about why we didn't use x or y off-the-shelf technology or product instead. There are a number of reasons I could share, but one of the most important ones is that it's usually hard to evolve someone else's offering (even if OSS) as the constraints of your business change over time. This post from Hazel Edmands is such a fantastic deep dive into our most recent round of doing just that. https://lnkd.in/g9mvvHWW
120
3 Comments -
Trevor Tingey
Workflows is getting better and better! My team recently launched 4 quality-of-life upgrades to Domo’s Workflows: 1. Notifications: You can get text or email alerts that your workflow ran or failed, instead of having to manually check in on them. 2. Task Creator Access: We gave you the ability to lock down a specific queue to only see tasks assigned to you, but now you can choose to allow users to also see tasks they created in that queue. Really useful for tracking your requests, but also not opening up the whole world to everyone. 3. Code Engine Global Library: Choosing the right function for your workflows used to feel like deciphering hieroglyphs. Now we have a library with user-friendly titles, descriptions and logos that make it clear what each function does. 4. Account Data Type: Now you can build more flexible and reusable integrations without exposing your account credentials using this feature—instead of having to hardcode a single account into a function, you can use an account variable so that each person can leverage their own credentials to talk to their third party apps. If you’re in the Domo community, these upgrades are ready and waiting for you. If you’re not and want to start building smarter workflows into your business, message me and I’ll get you connected to the right person. #domoupdates #domoproduct
58
5 Comments -
Rommel Rico
I lead an informal 15 minute pre-standup meeting with my team (which we call Coffee-n-tix). It's a great tool for being an effective leader and for bringing psychological safety to my team. Engineers hate meetings, especially ones that _seem_ redundant, but below is why I keep it to this day. Reason 1: Recognition and reward > The normal standup is usually time-constrained. They're scheduled back-to-back across teams, so people usually have to be succinct to wrap up on time. > But on the pre-standup time, we can take our time to thank people for something they did. > We can go into details. We can be specific. We can share exactly what was done, how, and why it's appreciated. > This is a great mechanism for building team morale, preventing burnout, and rewarding laudable behavior. Reason 2: Preparing for the day > The standup meeting is all about what you're working on and your current blockers. It usually doesn't allow a lot of time to be strategic about how to use your time. > In the pre-standup meeting, we look at capacity and demand. We try to anticipate issues, and develop contingency plans for what to do when those issues come up. For example, we might decide that we need a person to not be overloaded with work because they need to be ready to provide production support, or we might decide that a brainstorming or pair programming session is needed. > This lets us be prepared ahead of time, instead of getting caught with our pants down. Reason 3: Tracking metrics. > During our pre-standup meeting, we look at our current metrics (usually just staring at the Jira board). > This is a great time to have honest, frank conversations about tickets that are taking too long, were not properly thought through, or have challenges that we didn't anticipate. > Then when we join the normal standup meeting, we can present a united front to the wider audience and we have a plan to tackle whatever the issue might be. Reason 4: Problem solving. 
> There isn't a single day where we don't confront problems or challenges. > And surprisingly, most times the problems are incredibly dumb like someone doesn't know how to deploy or test a change. > The pre-standup meeting gives us time to talk about those issues directly and resolve them. We can choose to just talk about them, do a quick diagramming session, or maybe even do a Code with me session. > Then when we join the standup meeting, we can go over those problems super briefly but also explain how we are already a step ahead and we have a solution in mind. > Or, if we have a big, scary problem that we can't figure out before standup, we've already ruled out all the easy stuff and we can focus the energies of the larger audience on resolving this issue, thereby saving everybody time. #engineeringmanagement #softwareengineering
14
3 Comments -
Diego Molina
I find interesting the hype about counting downloads for a given tool as a metric for success. I guess it helps to show how popular something is, but I would love it if people also considered other aspects, such as project governance and project activity. Anyway, let's follow the trend (evidence in the comments): - Selenium Java downloads: 98.5M in May 2024 - Selenium active users in the last 30 days: 2.6M. Happy automation!
54
9 Comments -
Peter Gillard-Moss
From IC engineer to C-Level exec, the most effective people I've worked with have one thing in common: they break down and decouple decisions. Optionality, trade-offs, last responsible moment, one-way and two-way door decisions, slicing, MVPs, proofs of concept, etc. are all techniques they use for breaking decisions down and getting them made. The best engineers did this when working on code. Rather than trying to decide on all the functionality at once they'd be able to say "I can break this story down into smaller pieces of value that we can get released to users fast" or "we don't need to deliver this piece of functionality now, we can wait until we see how users respond". The best architects, product managers and business leaders do the same thing. It's a real skill to decouple decisions. It's a real skill to be able to defer parts of a decision to next or later (or even never) to enable faster movement and lower risk. I have always been impressed when I witness it. Often it's obvious with hindsight and sometimes it's completely out-of-the-box and you would never have realised.
60
3 Comments -
Zeeshan Muzammal
There are two types of mid-level software developers out there: First: Those teaching and sharing how to handle thousands or millions of transactions per second (100K tps). Second: Those avidly watching and reading about how to handle thousands or millions of transactions per second. Meanwhile, both are working on systems that barely handle 10 transactions per second on a good day. #ScalableSystem
13
1 Comment -
Cristina Sorice
I've always been interested in big projects, and I've spent my career in what's now called "deep tech" or "hard tech" so far. I've worked on projects varying in size and scope, but they all have one thing in common: they never work out as we expect, hope, or plan. How do big things get done? How do projects succeed without exceeding schedule and budget? I'm still learning by trial and error, but this book has provided me with lots of food for thought. If you've read it, I'd love to hear about how this applies to your work! #engineering #hardtech #deeptech #projectmanagement
38
16 Comments -
Thomas Stringer
Math is important in all of software engineering, and the reliability engineering specialty is no exception. Things like calculating error budget burn rate, long and short windows and how they affect you can get complicated quickly. Understanding the equations (and I mean *really* understanding them, not just memorizing them) is the first big step to being able to reproducibly calculate your targets and adjust them when needed. #sre #sitereliabilityengineering
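The burn-rate arithmetic the post alludes to can be sketched in a few lines. This is an illustrative Python sketch of the widely used multiwindow burn-rate approach, not code from the post; all names and the example numbers are mine:

```python
def burn_rate(error_rate: float, slo: float) -> float:
    """How fast the error budget is being consumed.

    A burn rate of 1.0 spends the budget exactly as fast as the SLO
    allows; a rate of ~14.4 would exhaust a 30-day budget in ~2 days.
    """
    budget = 1.0 - slo  # allowed error fraction, e.g. 0.001 for a 99.9% SLO
    return error_rate / budget


def should_alert(long_rate: float, short_rate: float,
                 slo: float, threshold: float) -> bool:
    """Page only when both a long and a short window burn too fast.

    The long window keeps a brief spike from paging; the short window
    keeps an already-resolved incident from paging.
    """
    return (burn_rate(long_rate, slo) >= threshold
            and burn_rate(short_rate, slo) >= threshold)


# 99.9% SLO: 0.1% of requests may fail. A 1.44% observed error rate
# burns roughly 14.4x faster than the budget allows.
print(burn_rate(0.0144, 0.999))
print(should_alert(0.0144, 0.02, 0.999, 10))
```

Really understanding the equation, as the post urges, means seeing that the threshold directly encodes "how quickly would I run out of budget if this keeps up."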
18
1 Comment -
Colt McNealy
<rant> Yesterday morning I wrote a post about the phrase "scaling independently" while waiting for my KIND cluster to spin up, and I promised that I would give an example of where you really DO want things to "scale independently" in a separate post. Well, that post is here. Let's talk about #apachekafka and its new Raft-based metadata solution, known as KRaft (yes, it evokes Mac-and-Cheese, which is partially why I like it so much). KRaft is a replacement for ZooKeeper's former role in Kafka. In a KRaft-based cluster, the Metadata Quorum is a group of Kafka Servers that have the "controller" role. The Metadata Quorum stores information such as "what topics exist in my cluster?" and "which Broker is the leader for which Partition?" and "which follower replicas live on which Broker?". This information is CRITICAL for the availability and consistency of a Kafka cluster. In order to ensure consistency of the Metadata Quorum, one specific Controller is chosen as the leader. All write requests go through that leader. In order to ensure availability of the Metadata Quorum, there can be "follower" Controllers in the Metadata Quorum which also store the metadata updates (synchronous replication). In case of failure of the leader, one of the followers can become the leader. In this specific case, it DOES make sense to scale the Brokers and Controllers independently. Why? First, the Controllers don't actually scale. One can be a leader at a time, so the others are just providing backup (note: for reasons beyond the scope of this post, it's best to have an odd number). Additionally, before KIP-853 is implemented, it's actually really hard to add or remove Controllers to/from the Metadata Quorum without downtime. Secondly, it's possible for a misbehaving client to take down a Broker, for example by sending way too much data to a specific partition. If it's just a single Broker that's lost, most of the cluster will continue on and live to fight another day. 
However, losing a Controller is a Very Bad Thing. Thus, by separating the Controllers from the Brokers, we can improve the availability of the cluster. It is indeed possible to run Kafka with some servers that share the responsibility of Controller and Broker. This is great for development (especially local dev) and also in *SOME* highly resource-constrained production environments. However, I would suggest as a rule of thumb that you should probably: - Separate out your Controllers on their own isolated machines - Start with a Metadata Quorum of size 3, which allows losing one Controller and continuing on alive - Put your Brokers on another set of nodes. Now, there's another good example of how Kafka scales X and Y independently: Compute and Storage. Watch out for the next "Colt Rant" about this, coming later this week! </rant> PS—here's the link to the previous post: https://lnkd.in/gXU77rKW
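The "quorum of 3 survives one failure, and odd sizes are best" rule of thumb falls out of Raft majority arithmetic, which can be sketched quickly (illustrative Python, not Kafka code):

```python
def majority(n: int) -> int:
    """Votes needed for a Raft-style quorum of n controllers."""
    return n // 2 + 1


def tolerated_failures(n: int) -> int:
    """Controllers that can be lost while a majority still survives."""
    return n - majority(n)


# n=1 tolerates 0, n=2 tolerates 0, n=3 tolerates 1,
# n=4 still tolerates only 1, n=5 tolerates 2 - an even-sized quorum
# adds a failure point without adding fault tolerance.
for n in (1, 2, 3, 4, 5):
    print(n, majority(n), tolerated_failures(n))
```

This is one concrete reason behind the post's advice to start with a quorum of 3 and prefer odd sizes.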
12
6 Comments
Others named Chris Marchbanks in United States
-
Chris Marchbanks
Austin, TX
-
Chris Marchbanks
Software Development Engineer In Testing (SDET I) at Change Healthcare
Nashville Metropolitan Area
-
Chris Marchbanks
Connecting with clients & customers through relationships, technology, & creativity
Dallas-Fort Worth Metroplex
-
Chris Marchbanks
Uniserv Director for USEA
Lehi, UT
19 others named Chris Marchbanks in United States are on LinkedIn