Useful DevTools Tips and Tricks

By Patrick Brosset · Tue, 27 Jun 2023 · https://smashingmagazine.com/2023/06/popular-devtools-tips/

When it comes to browser DevTools, we all have our own preferences and personal workflows, and we pride ourselves in knowing that “one little trick” that makes our debugging lives easier.

But also — and I know this from having worked on DevTools at Mozilla and Microsoft for the past ten years — most people tend to use the same three or four DevTools features, leaving the rest unused. This is unfortunate as there are dozens of panels and hundreds of features available in DevTools across all browsers, and even the less popular ones can be quite useful when you need them.

As it turns out, I’ve maintained the DevTools Tips website for the past two years now. More and more tips get added over time, and traffic keeps growing. I recently started tracking the most popular tips that people are accessing on the site, and I thought it would be interesting to share some of this data with you!

So, here are the top 15 most popular DevTools tips from the website.

If there are other tips that you love and that make you more productive, consider sharing them with our community in the comments section!

Let’s count down, starting with…

15: Zoom DevTools

If you’re like me, you may find the text and buttons in DevTools too small to use comfortably. I know I’m not alone here, judging by the number of people who ask our team how to make them bigger!

Well, it turns out you can actually zoom into the DevTools UI.

DevTools’ user interface is built with HTML, CSS, and JavaScript, which means that it’s rendered as web content by the browser. And just like any other web content in browsers, it can be zoomed in or out by using the Ctrl+ and Ctrl- keyboard shortcuts (or Cmd+ and Cmd- on macOS).

So, if you find the text in DevTools too small to read, click anywhere in DevTools to make sure the focus is there, and then press Ctrl+ (or Cmd+ on macOS).

13: List The Fonts Used On An Element

Chromium-based browsers such as Chrome, Edge, Brave, or Opera can also display the fonts used by an element that contains text:

  • Select an element that only contains text children.
  • Open the Computed tab in the sidebar of the Elements tool.
  • Scroll down to the bottom of the tab.
  • The rendered fonts are displayed.

Note: To learn more, see “List the fonts used on a page or an element.”

12: Measure Arbitrary Distances On A Page

Sometimes it can be useful to quickly measure the size of an area on a webpage or the distance between two things. You can, of course, use DevTools to get the size of any given element. But sometimes, you need to measure an arbitrary distance that may not match any element on the page.

When this happens, one nice way is to use Firefox’s measurement tool:

  1. If you haven’t done so already, enable the tool. This only needs to be done once: Open DevTools, go into the Settings panel by pressing F1 and, in the Available Toolbox Buttons, check the Measure a portion of the page option.
  2. Now, on any page, click the new Measure a portion of the page icon in the toolbar.
  3. Click and drag with the mouse to measure distances and areas.

Note: To learn more, see “Measure arbitrary distances in the page.”

11: Detect Unused Code

One way to make a webpage appear fast to your users is to make sure it only loads the JavaScript and CSS dependencies it truly needs.

This may seem obvious, but today’s complex web apps often load huge bundles of code, even when only a small portion is needed to render the first page.

In Chromium-based browsers, you can use the Coverage tool to identify which parts of your code are unused. Here is how:

  1. Open the Coverage tool. You can use the Command Menu as a shortcut: press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “coverage”, and then press Enter.
  2. Click Start instrumenting coverage and refresh the page.
  3. Wait for the page to reload and for the coverage report to appear.
  4. Click any of the reported files to open them in the Sources tool.

The file appears in the tool along with blue and red bars that indicate whether a line of code is used or unused, respectively.

Note: To learn more, see “Detect unused CSS and JavaScript code.”

10: Change The Playback Rate Of A Video

Usually, when a video appears on a webpage, the video player that displays it also provides buttons to control its playback, including a way to speed it up or slow it down. But that’s not always the case.

In cases when the webpage makes it difficult or impossible to control a video, you can use DevTools to control it via JavaScript instead.

  1. Open DevTools.
  2. Select the <video> element in the Elements tool (called Inspector in Firefox).
  3. Open the Console tool.
  4. Type the following: $0.playbackRate = 2; and press Enter.

The $0 expression is a shortcut that refers to whatever element is currently selected in DevTools; in this case, it refers to the <video> HTML element.

By using the playbackRate property of the <video> element, you can speed up or slow down the video. Note that you could also use any of the other <video> element properties or methods, such as:

  • $0.pause() to pause the video;
  • $0.play() to resume playing the video;
  • $0.loop = true to repeat the video in a loop.

Note: To learn more, see “Speed up or slow down a video.”

9: Use DevTools In Another Language

If, like me, English isn’t your primary language, using DevTools in English might make things harder for you.

If that’s your case, know that you can actually use a translated version of DevTools that either matches your operating system, your browser, or a language of your choice.

The procedure differs per browser.

In Safari, both the browser and Web Inspector (which is what DevTools is called in Safari) inherit the language of the operating system. So if you want to use a different language for DevTools, you’ll need to set it globally by going into System Preferences → Language & Region → Apps.

In Firefox, DevTools always matches the language of the browser. So, if you want to use DevTools in, say, French, then download Firefox in French.

Finally, in Chrome or Edge, you can choose to either match the language of the browser or set a different language just for DevTools.

To make your choice:

  1. Open DevTools and press F1 to open the Settings.
  2. In the Language drop-down, choose either Browser UI language to match the browser language or choose another language from the list.

Note: To learn more, see “Use DevTools in another language.”

8: Disable Event Listeners

Event listeners can sometimes get in the way of debugging a webpage. If you’re investigating a particular issue, but every time you move your mouse or use the keyboard, unrelated event listeners are triggered, this could make it harder to focus on your task.

A simple way to disable an event listener is by selecting the element it applies to in the Elements tool (or Inspector in Firefox). Once you’ve found and selected the element, do either of the following:

  • In Firefox, click the event badge next to the element, and in the popup that appears, uncheck the listeners you want to disable.
  • In Chrome or Edge, click the Event Listeners tab in the sidebar panel, find the listener you want to remove, and click Remove.

Note: To learn more, see “Remove or disable event listeners.”

7: View Console Logs On Non-Safari Browsers On iOS

As you might know, Safari isn’t the only browser you can install and use on an iOS device. Firefox, Chrome, Edge, and others can also be used. Technically, they all run on the same underlying browser rendering engine, WebKit, so a website should more or less look the same in all of these browsers on iOS.

However, it’s possible to have bugs on other browsers that don’t replicate in Safari. This can be quite tricky to investigate. While it’s possible to debug Safari on an iOS device by attaching the device to a Mac with a USB cable, it’s impossible to debug non-Safari browsers.

Thankfully, there is a way to at least see your console logs in Chrome and Edge (and possibly other Chromium-based browsers) when using iOS:

  1. Open Chrome or Edge on your iOS device and go to the special about:inspect page.
  2. Click Start Logging.
  3. Keep this tab open and then open another one.
  4. In the new tab, go to the page you’re trying to debug.
  5. Return to the previous tab. Your console logs should now be displayed.

Note: To learn more, see “View console logs from non-Safari browsers on an iPhone.”

6: Copy Element Styles

Sometimes it’s useful to extract a single element from a webpage, maybe to test it in isolation. To do this, you’ll first need to extract the element’s HTML code via the Elements tool by right-clicking the element and choosing Copy → Copy outer HTML.

Extracting the element’s styles, however, is a bit more difficult as it involves going over all of the CSS rules that apply to the element.

Chrome, Edge, and other Chromium-based browsers make this step a lot faster:

  1. In the Elements tool, select the element you want to copy styles from.
  2. Right-click the selected element.
  3. Click Copy → Copy styles.
  4. Paste the result in your text editor.

You now have all the styles that apply to this element, including inherited styles and custom properties, in a single list.

Note: To learn more, see “Copy an element’s styles.”

5: Download All Images On The Page

This nice tip isn’t specific to any browser and can be run anywhere as long as you can execute JavaScript. If you want to download all of the images that are on a webpage, open the Console tool, paste the following code, and press Enter:

$$('img').forEach(async (img) => {
 try {
   const src = img.src;
   // Fetch the image as a blob.
   const fetchResponse = await fetch(src);
   const blob = await fetchResponse.blob();
   const mimeType = blob.type;
   // Figure out a name for it from the src and the mime-type.
   const start = src.lastIndexOf('/') + 1;
   const end = src.indexOf('.', start);
   let name = src.substring(start, end === -1 ? undefined : end);
   name = name.replace(/[^a-zA-Z0-9]+/g, '-');
   name += '.' + mimeType.substring(mimeType.lastIndexOf('/') + 1);
   // Download the blob using an <a> element.
   const a = document.createElement('a');
   a.setAttribute('href', URL.createObjectURL(blob));
   a.setAttribute('download', name);
   a.click();
 } catch (e) {}
});

Note that this might not always succeed: the Content Security Policy in place on the webpage may cause some of the images to fail to download.

If you happen to use this technique often, you might want to turn this into a reusable snippet of code by pasting it into the Snippets panel, which can be found in the left sidebar of the Sources tool in Chromium-based browsers.

In Firefox, you can also press Ctrl+I on any webpage to open Page Info, then go to Media and select Save As to download all the images.

Note: To learn more, see “Download all images from the page.”

4: Visualize A Page In 3D

The HTML and CSS code we write to create webpages gets parsed, interpreted, and transformed by the browser, which turns it into various tree-like data structures like the DOM, compositing layers, or the stacking context tree.

While these data structures are mostly internal in-memory representations of a running webpage, it can sometimes be helpful to explore them and make sure things work as intended.

A three-dimensional representation of these structures can help see things in a way that other representations can’t. Plus, let’s admit it, it’s cool!

Edge is the only browser that provides a tool dedicated to visualizing webpages in 3D in a variety of ways.

  1. The easiest way to open it is by using the Command Menu. Press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “3D” and then press Enter.
  2. In the 3D View tool, choose between the three different modes: Z-Index, DOM, and Composited Layers.
  3. Use your mouse cursor to pan, rotate, or zoom the 3D scene.

The Z-Index mode can be helpful to know which elements are stacking contexts and which are positioned on the z-axis.

The DOM mode can be used to easily see how deep your DOM tree is or find elements that are outside of the viewport.

The Composited Layers mode shows all the different layers the browser rendering engine creates to paint the page as quickly as possible.

Note that Safari and Chrome also have a Layers tool that shows composited layers.

Note: To learn more, see “See the page in 3D.”

3: Disable Abusive Debugger Statements

Some websites aren’t very nice to us web developers. While they seem normal at first, as soon as you open DevTools, they immediately get stuck and pause at a JavaScript breakpoint, making it very hard to inspect the page!

These websites achieve this by adding a debugger statement in their code. This statement has no effect as long as DevTools is closed, but as soon as you open it, DevTools pauses the website’s main thread.
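
For illustration, here is a minimal sketch of the kind of anti-inspection code such a site might run. This is an assumption about the general technique rather than any specific site’s script:

// While DevTools is closed, the debugger statement is a no-op.
// As soon as DevTools opens, each call pauses the main thread,
// which is what makes the page so hard to inspect.
setInterval(() => {
  debugger;
}, 100);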

If you ever find yourself in this situation, here is a way to get around it:

  1. Open the Sources tool (called Debugger in Firefox).
  2. Find the line where the debugger statement is. That shouldn’t be hard since the debugger is currently paused there, so it should be visible right away.
  3. Right-click on the line number next to this line.
  4. In the context menu, choose Never pause here.
  5. Refresh the page.

Note: To learn more, see “Disable abusive debugger statements that prevent inspecting websites.”

2: Edit And Resend Network Requests

When working on your server-side logic or API, it may be useful to send a request over and over again without having to reload the entire client-side webpage and interact with it each time. Sometimes you just need to tweak a couple of request parameters to test something.

One of the easiest ways to do this is by using Edge’s Network Console tool or Firefox’s Edit and Resend feature of the Network tool. Both of them allow you to start from an existing request, modify it, and resend it.

In Firefox:

  • Open the Network tool.
  • Right-click the network request you want to edit and then click Edit and Resend.
  • A new sidebar panel opens up, which lets you change things like the URL, the method, the request parameters, and even the body.
  • Change anything you need and click Send.

In Edge:

  • First, enable the Network Console tool by going into the Settings panel (press F1) → Experiments → Enable Network Console.
  • Then, in the Network tool, find the request you want to edit, right-click it and then click Edit and Resend.
  • The Network Console tool appears, which lets you change the request just like in Firefox.
  • Make the changes you need, and then click Send.


Note: To learn more, see “Edit and resend faulty network requests to debug them.”

If you need to resend a request without editing it first, you can do so too. (See: Replay an XHR request.)

And the honor of being the Number One most popular DevTools tip in this roundup goes to… 🥁

1: Simulate Devices

This is, by far, the most widely viewed DevTools tip on my website. I’m not sure why exactly, but I have theories:

  • Cross-browser and cross-device testing remain, to this day, one of the most important pain points that web developers face, and it’s nice to be able to simulate other devices from the comfort of your development browser.
  • People might be using it to achieve non-dev tasks. For example, people use it to post photos on Instagram from their laptops or desktop computers!

It’s important to realize, though, that DevTools can’t simulate what your website will look like on another device. Under the hood, it is all still the same browser rendering engine. So, for example, when you simulate an iPhone by using Firefox’s Responsive Design Mode, the page still gets rendered by Firefox’s rendering engine, Gecko, rather than Safari’s rendering engine, WebKit.

Always test on actual browsers and actual devices if you don’t want your users to stumble upon bugs you could have caught.

That being said, simulating devices in DevTools is very useful for testing how a layout works at different screen sizes and device pixel ratios. You can even use it to simulate touch inputs and other user agent strings.

Here are the easiest ways to simulate devices per browser:

  • In Safari, press Ctrl+Cmd+R, or click Develop in the menu bar and then click Enter Responsive Design Mode.
  • In Firefox, press Ctrl+Shift+M (or Cmd+Shift+M), or use the browser menu → More tools → Responsive Design Mode.
  • In Chrome or Edge, open DevTools first, then press Ctrl+Shift+M (or Cmd+Shift+M), or click the Device Toolbar icon.


Note: To learn more, see “Simulate different devices and screen sizes.”

Finally, if you find yourself simulating screen sizes often, you might be interested in using Polypane. Polypane is a great development browser that lets you simulate multiple synchronized viewports side by side, so you can see how your website renders at different sizes at the same time.

Polypane comes with its own set of unique features, which you can also find on DevTools Tips.

Conclusion

I’m hoping you can see now that DevTools is very versatile and can be used to achieve as many tasks as your imagination allows. Whatever your debugging use case is, there’s probably a tool that’s right for the job. And if there isn’t, you may be able to find out what you need to know by running JavaScript in the Console!

If you’ve discovered cool little tips that come in handy in specific situations, please share them in the comments section, as they may be very useful to others too.

Behind The Curtains Of Wikipedia Redesign

By Geoff Graham · Mon, 26 Jun 2023 · https://smashingmagazine.com/2023/06/behind-curtains-wikipedia-redesign/

Wikipedia is more than a website — it’s perhaps a cornerstone of the World Wide Web. For decades, the site has provided a model for collaborating online, designing long-form content layouts, and supporting internationalization.

One of the more endearing qualities of Wikipedia is its design, which is known for its utilitarian aesthetics that have stuck around since its 2001 inception. The site has undergone redesigns before, but they are rare and often introduce subtle updates.

This year, 2023, marks the first Wikipedia redesign since 2014. Alex Hollender and Jon Robson led the effort and were kind enough to discuss it with us. The following is an interview that delves into what changed in this latest design, getting into the process as well as design and development details that we all can learn from.

Interview

Geoff Graham: When I think of Wikipedia as a website, I think about the design first and foremost. It’s classic for its focus on function over aesthetics, yet often considered a relic along the same lines as Craigslist. How was it decided that “now” is the right time for a redesign?

Alex Hollender: You know, it’s funny, I think people sometimes assume that organizations make these super-calculated, methodical decisions, and maybe some do. What I’ve experienced more often are opportunistic decisions resulting from some combination of intuition and relationships. Nirzar Pangarkar, the design director back in 2019, knew what the organization was hoping to accomplish in the coming years and understood that media and content on the internet were changing rapidly. He saw that we needed to set ourselves up with a better foundation to iterate on top of going forward. He also imagined how the website looked to newcomers and thought that making it a bit more familiar to them would offer a more inclusive experience. And I think he also sensed that in terms of the culture of the Wikipedia community, if we let any more time pass before making some changes, the conservatism and ossification would grow more and more intense, and projects like this would only become more difficult down the road.

So it’s not like something was severely broken, or data was pointing us towards a specific problem or opportunity. There were a few concrete things we knew could be improved, but the driving force was Nirzar’s intuition regarding some of these larger things. He had a great relationship with the Chief Product Officer, Toby Negrin, and our team’s Product Manager, Olga Vasileva, and found an opportunity to get the project started. And because it can be somewhat difficult to articulate these sorts of intuitions, Nirzar, Olga, and I made a little design sprint to help others envision and understand the types of changes we could start with and where they might lead us.

Geoff: Wikipedia is more than just a website, right? It’s more like 300 sites where each instance is a different language. How do you approach a design system for a large network of sites like that? Is there a single, centralized source of truth, or is it something looser, depending on the locale?

Alex: Right, so there’s Wikipedia in over 300 languages, then there’s also a bunch of sister projects, including Wikidata, Commons, Wikiquote, Wikisource, and others — all of which use the same interface. I’d say the needs are maybe 80-ish percent the same across all of the experiences. Then you’ve got things where specific languages need special functionality, or the Wikidata search bar needs something extra, or the Wikisource “article” page has different needs from the Wikipedia one.

There’s, unfortunately, no single source of truth — we don’t even have all of the customizations and variations documented. A big part of being a designer here is just building a catalog in your mind over time. Different people know about different little nooks and crannies and would remind us like, “Hey, if you want to put a button there, you’re going to have to figure out something for project X in language Y because they’ve got a custom feature living in that spot currently.” It’s this very organic, emergent kind of thing where it’s just grown to fit people’s needs in a very unstructured, decentralized way. Super cool but quite difficult when you want to tweak some of the more fundamental/foundational parts of the experience.

Jon Robson: Before I worked on Wikipedia, I’d never worked on multilingual sites. There’s such a fascinating depth to it, for example, how numbering systems differ in different languages, how quotation marks should be considered translated content, how certain projects have content in two scripts, and how some projects add their own cultural flavor to the design. If you look at the Navajo Wikipedia website, they use a Navajo rug pattern which they’ve had since at least 2005.

It was fascinating how during this redesign, every release risked disrupting something small, as it was impossible to audit everything that was happening in all those projects. We had to make peace with the fact that we might not be able to retain them all and that things would break, and we’d iterate and find a happy medium. Often it’s unclear who to talk to about these things within the organization. Some projects just notice our changes and adapt, while other communities are more vocal. So we have to work together to reconcile these extremes. I’ve been impressed with how Alex has remained so stoic as a designer despite the curve balls the project has thrown at him.

Geoff: I imagine there’s a fine balance when working on a redesign for a site that’s as ubiquitous and that has as a long legacy as Wikipedia. How important was maintaining a sense of familiarity with the design for users? And how constraining was that for introducing new design elements?

Alex: Ultimately, we were focused on delivering the best reading and editing experience we could, somewhat regardless of familiarity for experienced users. For example, moving the table of contents from being inline below the lead section to being a sidebar, from a familiarity perspective, was a huge shift, and a lot of experienced users couldn’t get past that. For them, it violated the platonic form of a Wikipedia article or something, like if the table of contents wasn’t inline, then the article wasn’t a Wikipedia article. And while they tried to justify that preference from a functionality standpoint, their reasons weren’t strong, and I think it was mostly about them being uncomfortable with the unfamiliar. Meanwhile, all of the testing and the functional justifications we, and some community members, put forth made it super clear that the sidebar was the better approach. So, that’s how we made that particular decision.

Jon: The table of contents going from within the article to outside the article also uncovered a lot of interesting innovations our community had made for certain articles. For example, in some articles, they’d converted the standard table of contents to a horizontal layout using some inline styles or only listed the top-level headings using display: none in CSS to hide the rest. These customizations were broken when we implemented our redesign, which has opened up interesting discussions about whether customizations should be core parts of the software and how they should work in the new design.
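
For illustration, here is a rough sketch of the kind of community customization Jon describes. The selectors are hypothetical stand-ins for the actual MediaWiki markup:

/* Lay the table of contents out horizontally instead of as a vertical list. */
.toc ul {
  display: flex;
  flex-wrap: wrap;
}

/* Only list the top-level headings by hiding the nested ones. */
.toc ul ul {
  display: none;
}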

Alex: I think the question of familiarity came into play more in terms of the rollout and how much we could change at once. We were sensitive to the risk of upsetting this very small part of the community that has an outsized influence on our decisions. Our fear was they would try to shut the project down, which has happened with other projects, big and small, in the past. So, for example, we didn’t include an increased font size in the first version of the new interface, even though we (and many community members) strongly believed it would be a significant improvement. We know from past projects that typography is a particularly hot-button topic.

Geoff: Who else was involved in the redesign? What roles did they play, and how did you manage all the work?

Alex: As far as our team goes, it’s about 5-6 Engineers, a Product Manager, a Community Specialist, and someone on Quality Assurance. Pretty much everyone was involved in a meaningful way in terms of exploring design challenges and weighing in on various options. Olga, the Product Manager, and several of the Engineers are better than I am when it comes to thinking about certain challenges. One clear example is accessibility.

There were several community members who were close collaborators and hundreds of others who were more casually involved. The majority of that collaboration happens on Phabricator, which is our task-tracking system. Of course, the timing gets tricky because community members might jump in with ideas or concerns as we’re finishing up a feature, maybe just because they weren’t aware that the conversation had started a few months back or whatever.

And then there’s the Wikimedia Foundation (WMF) design team. Each member of the design team has their own product team they belong to, so involvement, for the most part, happens via design reviews. There was a bunch of overlap, particularly between the work we were doing and the stuff the editing team worked on, so I got to collaborate closely with that designer. Also, each designer is assigned a design mentor. So, Rita, who is my design mentor — and who also happens to be an incredible designer and person — was behind the scenes all along, helping me figure everything out.

To me, the whole process felt pretty inclusive. A lot of the time, it felt like the process and the conversations were guiding things more than any one individual, which is both cool and a little scary.

Geoff: Wikipedia has been used to study online text legibility (PDF) because of its heavy focus on content. Yet, there have been so many advances in web fonts and typography since the last significant Wikipedia redesign in 2004, from variable font formats and fluid typography to even newer stuff in CSS from this past year, like the super new text-wrap: balance and a new line height (lh) unit. What design considerations went into the text in the latest redesign?

Alex: As far as I understand, there was a typography refresh back in 2014, which succeeded in some ways but was also super contentious. In terms of design ownership, there’s an unwritten understanding that the volunteer community owns the content, and WMF owns the interface. And while the typography is clearly a fundamental part of the overall user experience of the site, it’s definitely on the content side of the content-interface divide, which makes it more difficult for us to work on.

Prior to this project, a lot of great work had already been done by the Design Systems Team regarding the font stack (which is critical, given all of the different language editions of Wikipedia), how the type sizing is declared (which has a big impact on the experience if you manually change the font size), and other things like that.

For this project, from a sort of 80/20 perspective, I think 80% of the room for improvement was managing the line length by adding a max-width, and increasing the base font-size value (which is hopefully coming soon). We did spend a bunch of time looking into other refinements that are forthcoming.
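
In CSS terms, the two changes Alex mentions boil down to something like the following sketch. The selector and values are illustrative, not Wikipedia’s actual stylesheet:

.article-content {
  max-width: 60em;  /* cap the line length so paragraphs stay readable */
  font-size: 1rem;  /* a larger base size for body text to scale from */
}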

Jon: I actually worked on that typography refresh early in my career at the Wikimedia Foundation. It was contentious for two reasons. First, we added a limited container width for the content and used Helvetica Neue for the font. The latter was a problem due to the “open source” nature of the project, which the community felt strongly about. We compromised by preferring an open font when available, which I think was Linux Libertine at the time.

That project was a lot shorter in terms of time, and we had more important problems to solve, such as having a functioning mobile site and a WYSIWYG editor. So, no compromise could be found on the limited width front. But I was glad we finally got that in with this redesign, even if it came eight years later. Free knowledge is more a marathon than a sprint.

Alex: I do think it’s ironic that Wikipedia, one of the most popular text-based websites on the internet, doesn’t necessarily have a super strong typography practice, at least from a design perspective. Maybe a lot of that has to do with how varied the content is, how many different templates we have, and all of the different languages we need to support. Maybe it would have to almost be a language-by-language endeavor if we were ever to pull it off. I’m not sure.

Editor’s Note: The main discussion and prototype for the project’s typography efforts are available to view.

Geoff: Speaking of the differences in web design since 2004, the term “responsive web design” was also coined in that span of time. Wikipedia has no doubt had a mobile presence for some time, but were there any new challenges to make the site more responsive, given how best practices have evolved?

Alex: We set a soft goal of delivering a great experience down to a 500px browser width. I think it’s fairly uncommon for people to be using desktop or laptop devices with browsers that narrow. But these days, it’s pretty easy to achieve a fully-responsive site with CSS alone, so there didn’t seem to be much of a tradeoff there. Plus, we heard from a few editors that they often tile two or three browser windows side-by-side, so it can get narrow in those cases. The updated interface does feature three menus that can be pinned open as sidebars or collapsed as dropdowns, which is a configuration mainly for logged-in users in order to give them more control over their workstations. And the state of those menus is managed by JavaScript, which presented a slight challenge. Jon wrote a great article a few years ago about why we still have separate mobile and desktop sites.

I think another aspect of making things work well down to 500px was that we wanted to push ourselves to see how close we might be able to get to have one site for all devices, though we’re not quite there yet.

Jon: If I remember correctly, Alex and I had a good back-and-forth about that 500px threshold. In theory, we could have supported a breakpoint below that, and Alex had the mockups ready, but I was concerned that it would slow down development. Plus, the use case was not there as most of our users were resizing browsers, and we could back that up with data.

In fact, during the redesign, vocal members of our community pushed us to introduce an explicit viewport size in our markup because they were annoyed that the table of contents component was collapsing inconsistently in browsers. If you view the source, you’ll now see <meta name="viewport" content="width=1000">.

Note: You can even read the entire discussion about the change.

Geoff: I know front-end nerds will want to know how CSS is written and managed in this latest design because, well, I’m one of them! What does the process look like to make an edit to the styles?

Jon: You have to remember that Wikipedia — and the MediaWiki software that provides it — is quite old and very large, and some of our technology stack reflects that.

MediaWiki is primarily a progressively enhanced web page written in PHP, so we tend to ship HTML with vanilla JavaScript and CSS that enhances it. Our front end is really unusual in that we have no build scripts for our JavaScript and CSS. We write ES6 code without transpiling it, and we use LESS compiled at runtime in PHP, with heavy caching, for our CSS. HTML is provided by Mustache templates.

We are very conservative about what libraries and technologies we use, particularly if they are likely to have an impact on others in the stack. We use TypeScript in the project to validate our code using JSDoc blocks but do not write our code in TypeScript as many of our volunteers do not know the language, and we don’t want to alienate them.

There was talk about replacing LESS with a different CSS preprocessor, but we decided to retain the status quo we’ve used since 2013 because we don’t want to fragment our codebase. We currently use Mustache templates because that’s what we’ve used since 2014, but we hope to eventually phase those out and replace them with Vue.js templates.

All our code is open-sourced, which is pretty unusual and cool! So, if you ever see some visual thing that looks off or could be improved, we’re always happy to take PRs with CSS that fix it.

Geoff: Another nerdy but key question for you: how important were performance considerations to the redesign? What specific things do you look for in Wikipedia’s performance, and what tools do you use to measure them?

Jon: Performance is really important to us, as Wikipedia is global, and we have many projects growing in areas with slower internet connections. We have a performance dashboard that we monitor where we collect global data from our users using the NavigationTiming API. And we run automated synthetic performance tests using Sitespeed.io. This is all public, and anyone can dig into the data!

One of the biggest concerns for this redesign project was how replacing the internal search feature might lose users if it became too slow or unresponsive. We added instrumentation specifically designed to monitor this, and there’s a detailed write-up on how we analyzed the findings with synthetic performance tests.

Besides thinking about performance for specific features, we monitor bundle sizes of our render-blocking CSS assets, and our CI pipeline blocks anything that goes over our performance budget. We also run spikes to see if there are additional ways to improve performance. For example, in a quiet period, we ran a spike, which made our mobile site 300ms faster.

Given that we have hundreds of volunteers and staff collaborating on the codebase, it’s a challenge to uphold our own high-performance standards. We’re currently working on implementing a performance budget across all our projects to formally enforce this and share the knowledge more widely for everyone to reference.

Geoff: Alex, you’ve noted that one of the goals you defined for the project was to “develop a more flexible interface with an eye towards future features.” What makes the new interface more flexible compared to how it was before, and what future features are you anticipating?

Alex: A small example of a new feature is the sticky header, which is currently only available when you are logged into the site. We built it knowing that for different types of pages, like article pages versus discussion pages versus help pages, et cetera, we would want to put different types of tools in the sticky header. That forethought can save a lot of time and complexity in terms of development.

Another aspect of flexibility, or maybe more specifically, extensibility, is information architecture. The previous interface had two different places for page tools: in the sidebar menu on the left and then above the article title. So, whenever we worked on a new page tools feature, we had to decide where it would go. Creating a clearer and more structured information architecture for the site means there’s one place for page tools, one for global navigation, and so on. I think this will make it easier for us to design new features in the future.

In terms of future features, we’re thinking about reading settings: dark mode, the ability to increase and decrease the font size and line height more easily, and maybe even themes like the Wikipedia apps have. We’re also thinking about ways to help people discover more knowledge related to what they are reading. Other things we might consider are reading features, like the ability to take notes and create collections of articles.

Geoff: Thanks so much to you both for spending some time to share your work with us! Is there anything especially interesting about the design or the work it took to make it that might not be immediately obvious but that you are proud of?

Alex: I think it’s cool to think about super small things that have a big impact. Links are a critical part of the reading experience, and following from that, knowing which links you’ve already visited is important. Previously, there was so little contrast between visited links and black text that this whole sort of navigational wayfinding benefit was missing from experience. Changing the color of visited links was about as simple as a change can be from a technical perspective, with an outsized impact on the user experience.
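
In code, that outsized-impact change really is about as small as it gets. A sketch with an illustrative color, not necessarily Wikipedia’s exact palette:

/* Make visited links visibly different from unvisited ones. */
a:visited {
  color: #6b4ba1;
}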

Another thing I’m interested in and excited about is prototyping, specifically how additional fidelity in prototypes affects the design process. I reached a point where I was predominantly making prototypes with HTML, CSS, and JavaScript to work through design challenges rather than relying on mockups. It’s maybe impossible to know what impact that had in terms of the ability for us to have discussions about the designs, evaluate them, and include community members across many languages, among other things. There’s no way for us to know how the project would have turned out or how much longer it would have taken us to arrive at certain decisions if I hadn’t taken that approach, but my inclination is that it was super helpful.

Jon: The thing I’m most excited about is that the redesign project gave us the time to really pull apart a system that was 21 years old and build the foundation for something more sustainable. Fundamental things like introducing design tokens across the entire software stack are going to be powerful tools that we can use to support user customizations that allow people to change font size and enable a dark mode, the latter of which has been a popular request. So hopefully, we can finally deliver that.

Penpot’s Flex Layout: Building CSS Layouts In A Design Tool

By Mikołaj Dobrucki · Fri, 23 Jun 2023 · https://smashingmagazine.com/2023/06/penpot-flex-layout-building-css-layouts-design-tool/

This article is sponsored by Penpot.

Among design tools, Penpot holds a special place. It is an open-source design tool, meant for designers and developers to work together and help them speak the same language. It’s also the first design tool out there to be fully open-source and based on open web standards.

That’s a perfect choice for designers and developers working closely together, as Penpot’s approach can help to radically improve design-to-development processes, making them seamless and faster.

As open-source software, Penpot also evolves blazingly fast, fueled by the support of the community. When I was first writing about Penpot a few months ago, I shared my excitement about the app’s layout features that finally bring parity between design and code and follow the same rules as CSS does. Since then, the team behind Penpot has made creating layouts even better, so they deserve another look. I really enjoyed playing with the new Penpot’s features, and I believe you might want to give them a try too.

Designing Layouts Done Right

If you ever wrote or read CSS code, chances are high that you have already stumbled upon Flexbox. It’s a cornerstone of building layouts for the modern web, and quite likely, every single website you visit on an everyday basis uses it.

Flexbox is the bread and butter of creating simple, flexible layouts. It’s the most common way of positioning elements: stacking them in rows and columns and deciding how they are supposed to be aligned and distributed.
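
In CSS, that bread and butter amounts to just a handful of declarations. A minimal example of the kind of rule Penpot’s Flex Layout maps to (the class name is made up):

/* A row of items, vertically centered, evenly spaced, with a gap between them. */
.toolbar {
  display: flex;
  flex-direction: row;
  align-items: center;
  justify-content: space-between;
  gap: 8px;
}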

Therefore, creating Flexbox layouts is a vital part of most web hand-off processes, and not rarely a time-consuming one that causes friction between design and development. Usually, developers try to translate static mockups into code by rebuilding layouts made by designers from scratch. As most designers don’t write CSS code and most design tools follow a different logic than CSS does, lots can go wrong or get lost in translation.

This is where Penpot’s Flex Layout comes into play. Layouts built in Penpot don’t need tedious translating into code. Even though designers can build them using a familiar visual interface, they come as production-ready code out-of-the-box. And even if they need tweaking, they can still save developers plenty of time and guesswork as they follow a logic that is already familiar and understandable to them.

So, the bottom line is that it benefits everyone. It’s less work for developers, as they get the code they need straight away. It’s better for designers, as they have finer control over the final effect and a better understanding of the technologies they are designing for. And finally, it’s good for business, as it saves everyone’s time.

All of that without making the designer’s job an inch harder or forcing them to write a single line of code. Now, let’s take a look at what building designs with Flex Layout looks like in practice!

Getting Started With Flex Layout

As mentioned before, Flexbox can be understood as a toolkit for building layouts and positioning elements.

Each Flex Layout is generally an array, a list of elements. This list can be laid out from left to right, right to left, top to bottom, or bottom to top.

Flex Layout allows you to control how elements in these lists are aligned against each other.

You can also control how elements are laid out within containers.

Flex layouts can wrap into multiple lines too. You can also nest them indefinitely to create layouts as complex as you wish.

And that’s just the beginning. There are many more options to explore. As you can see, Flex layout gives you much more possibilities and precision than most design tools do. Creating with it is not only a better process but a more powerful one.

To explore all the possible features of Flex Layout, Penpot’s team created a comprehensive Playground template for you to try. If you don’t have a Penpot account yet, go ahead and create one now. Then, duplicate the file and try to play with it yourself! The file will take you on a journey through each and every Flex layout feature, with clear examples and definitions, so you can start building complex, robust layouts in no time.

Building An Example Together

To give you an even better understanding of what working with Flex Layout is in practice, let’s look at a practical example. In the next few steps, we will dig into the structure of this little mockup and rebuild each and every part of it with Flex Layout.

For the first elements, we can use Flex Layout for our buttons. With a few clicks, we can make sure they are responsive to the size of the icon and the label inside, and set paddings and distances between the children elements.

We can also use Flex Layout for the avatars stack. To make the images overlap, a negative gap between the elements does the trick. We also have full control over the order of elements. We can lay out the stack in any direction. We can also control the stack order of each element individually. That’s thanks to Penpot’s support for z-index, another useful CSS property.
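
Worth noting: in hand-written CSS, the gap property can’t be negative, so the overlap described above is typically expressed with negative margins. A rough equivalent with hypothetical class names:

.avatars {
  display: flex;
}

/* Pull each avatar over the previous one to create the overlap. */
.avatars img:not(:first-child) {
  margin-left: -12px;
}

/* z-index controls which avatar sits on top of the stack. */
.avatars img:hover {
  position: relative;
  z-index: 1;
}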

Flex layouts can be nested, creating more complex layouts and dependencies. In this case, we’ll create a separate Flex Layout for the navbar and another for the tiles grid below.

Remember that elements in Flex layouts can be wrapped? Let’s see this in action. In this case, we can create a flexible multi-dimensional layout of elements that’s responsive to the parent container and fills it with blocks both vertically and horizontally, just as CSS would do.
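
A rough CSS equivalent of that wrapping behavior, with illustrative class names and values:

.tiles {
  display: flex;
  flex-wrap: wrap; /* let items flow onto new lines as the container narrows */
  gap: 16px;
}

.tiles > * {
  flex: 1 1 200px; /* grow and shrink from a 200px base width */
}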

But what if some of the elements don’t belong to the grid? Alongside Flexbox, Penpot provides support for absolute positioning. This means that any element can be picked up from the Flex Layout to still live in the same container but ignore the layout rules. That’s exactly what we need for the ‘Edit’ button.

Eventually, we can transform the whole board into a Flex Layout. Now, we have a design that is not only easy to work with and edit but is also fully flexible. Wondering how your design would perform on a smaller or bigger screen? All you have to do is to resize the board.

Next Steps

If you’d like to take a look at the source file of the layout we’ve just built, go ahead and duplicate this file.

To dig deeper and learn more about how to use Flex Layout, don’t forget to try the Flex Layout template.

In case you get stuck or have some questions, Penpot Community would be the best place to look for help.

There is also a great video tutorial that explains how designers and developers can work together using Flex Layout.

Summary

As you can see, with Flex Layout, the possibilities for structuring your designs are endless. I believe that features like this are a welcome change in the design tools scene and a shift in the right direction: helping designers to take more control over their work and helping developers to work as efficiently as possible.

Coming Soon: Support For CSS Grid

Maybe you’re now thinking the same as I am: CSS layouts are not only Flexbox, are they? If you work with CSS, you probably know that Flexbox alone is not enough. More complex layouts are often better built using CSS Grid. Flexbox and Grid work best when combined, creating precise yet complex and fully responsive websites.

Penpot doesn’t support CSS Grid just yet, but that is about to change! You can learn more about it at the upcoming Penpot Fest. During the event, Penpot’s team will share their plan and a demo of the upcoming Grid Layout feature. Don’t hesitate to join (virtually or in person), if you’d like to learn more about the next steps for Penpot.

Using AI To Detect Sentiment In Audio Files

By Joas Pambou · Thu, 22 Jun 2023 · https://smashingmagazine.com/2023/06/ai-detect-sentiment-audio-files/

I don’t know if you’ve ever used Grammarly’s service for writing and editing content. But if you have, then you no doubt have seen the feature that detects the tone of your writing.

It’s an extremely helpful tool! It can be hard to know how something you write might be perceived by others, and this can help affirm or correct you. Sure, it’s some algorithm doing the work, and we know that not all AI-driven stuff is perfectly accurate. But as a gut check, it’s really useful.

Now imagine being able to do the same thing with audio files. How neat would it be to understand the underlying sentiments captured in audio recordings? Podcasters especially could stand to benefit from a tool like that, not to mention customer service teams and many other fields.

An audio sentiment analysis has the potential to transform the way we interact with data.

That’s what we are going to accomplish in this article.

The idea is fairly straightforward:

  • Upload an audio file.
  • Convert the content from speech to text.
  • Generate a score that indicates the type of sentiment it communicates.

But how do we actually build an interface that does all that? I’m going to introduce you to three tools and show how they work together to create an audio sentiment analyzer.

But First: Why Audio Sentiment Analysis?

By harnessing the capabilities of an audio sentiment analysis tool, developers and data professionals can uncover valuable insights from audio recordings, revolutionizing the way we interpret emotions and sentiments in the digital age. Customer service, for example, is crucial for businesses aiming to deliver personable experiences. We can surpass the limitations of text-based analysis to get a better idea of the feelings communicated by verbal exchanges in a variety of settings, including:

  • Call centers
    Call center agents can gain real-time insights into customer sentiment, enabling them to provide personalized and empathetic support.
  • Voice assistants
    Companies can improve their natural language processing algorithms to deliver more accurate responses to customer questions.
  • Surveys
    Organizations can gain valuable insights and understand customer satisfaction levels, identify areas of improvement, and make data-driven decisions to enhance overall customer experience.

And that is just the tip of the iceberg for one industry. Audio sentiment analysis offers valuable insights across various industries. Consider healthcare as another example. Audio analysis could enhance patient care and improve doctor-patient interactions. Healthcare providers can gain a deeper understanding of patient feedback, identify areas for improvement, and optimize the overall patient experience.

Market research is another area that could benefit from audio analysis. Researchers can leverage sentiments to gain valuable insights into a target audience’s reactions that could be used in everything from competitor analyses to brand refreshes with the use of audio speech data from interviews, focus groups, or even social media interactions where audio is used.

I can also see audio analysis being used in the design process. Like, instead of asking stakeholders to write responses, how about asking them to record their verbal reactions and running those through an audio analysis tool? The possibilities are endless!

The Technical Foundations Of Audio Sentiment Analysis

Let’s explore the technical foundations that underpin audio sentiment analysis. We will delve into machine learning for natural language processing (NLP) tasks and look into Streamlit as a web application framework. These essential components lay the groundwork for the audio analyzer we’re making.

Natural Language Processing

In our project, we leverage the Hugging Face Transformers library, a crucial component of our development toolkit. Developed by Hugging Face, the Transformers library equips developers with a vast collection of pre-trained models and advanced techniques, enabling them to extract valuable insights from audio data.

With Transformers, we can supply our audio analyzer with the ability to classify text, recognize named entities, answer questions, summarize text, translate, and generate text. Most notably, it also provides speech recognition and audio classification capabilities. Basically, we get an API that taps into pre-trained models so that our AI tool has a starting point rather than us having to train it ourselves.

UI Framework And Deployments

Streamlit is a web framework that simplifies the process of building interactive data applications. What I like about it is that it provides a set of predefined components that works well in the command line with the rest of the tools we’re using for the audio analyzer, not to mention we can deploy directly to their service to preview our work. It’s not required, as there may be other frameworks you are more familiar with.

Building The App

Now that we’ve established the two core components of our technical foundation, we will next explore implementation, such as

  1. Setting up the development environment,
  2. Performing sentiment analysis,
  3. Integrating speech recognition,
  4. Building the user interface, and
  5. Deploying the app.

Initial Setup

We begin by importing the libraries we need:

import os
import traceback
import streamlit as st
import speech_recognition as sr
from transformers import pipeline

We import os for system operations, traceback for error handling, streamlit (st) as our UI framework and for deployments, speech_recognition (sr) for audio transcription, and pipeline from Transformers to perform sentiment analysis using pre-trained models.

The project folder can be a pretty simple single directory with the following files:

  • app.py: The main script file for the Streamlit application.
  • requirements.txt: File specifying project dependencies.
  • README.md: Documentation file providing an overview of the project.

Creating The User Interface

Next, we set up the layout, courtesy of Streamlit’s framework. We can create a spacious UI by calling a wide layout:

st.set_page_config(layout="wide")

This ensures that the user interface provides ample space for displaying results and interacting with the tool.

Now let’s add some elements to the page using Streamlit’s functions. We can add a title and write some text:

# app.py
st.title("🎧 Audio Analysis 📝")
st.write("[Joas](https://huggingface.co/Pontonkid)")

I’d like to add a sidebar to the layout that can hold a description of the app as well as the form control for uploading an audio file. We’ll use the main area of the layout to display the audio transcription and sentiment score.

Here’s how we add a sidebar with Streamlit:

# app.py
st.sidebar.title("Audio Analysis")
st.sidebar.write("The Audio Analysis app is a powerful tool that allows you to analyze audio files and gain valuable insights from them. It combines speech recognition and sentiment analysis techniques to transcribe the audio and determine the sentiment expressed within it.")

And here’s how we add the form control for uploading an audio file:

# app.py
st.sidebar.header("Upload Audio")
audio_file = st.sidebar.file_uploader("Browse", type=["wav"])
upload_button = st.sidebar.button("Upload")

Notice that I’ve set up the file_uploader() so it only accepts WAV audio files. That’s just a preference, and you can specify the exact types of files you want to support. Also, notice how I added an Upload button to initiate the upload process.

Analyzing Audio Files

Here’s the fun part, where we get to extract text from an audio file, analyze it, and calculate a score that measures the sentiment level of what is said in the audio.

The plan is the following:

  1. Configure the tool to utilize a pre-trained NLP model fetched from the Hugging Face models hub.
  2. Integrate Transformers’ pipeline to perform sentiment analysis on the transcribed text.
  3. Print the transcribed text.
  4. Return a score based on the analysis of the text.

In the first step, we configure the tool to leverage a pre-trained model:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"

This points to a model in the Hugging Face models hub called DistilBERT. I like it because it’s focused on text classification and is pretty lightweight compared to some other models, making it ideal for a tutorial like this. But there are plenty of other models available in the hub to consider.

Now we integrate the pipeline() function that does the sentiment analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)

We’ve set that up to perform a sentiment analysis based on the DistilBERT model we’re using.

Next up, define a variable that captures the results we get back from the analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)

From there, we’ll assign variables for the score label and the score itself before returning them for use:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)
  sentiment_label = results[0]['label']
  sentiment_score = results[0]['score']
  return sentiment_label, sentiment_score

That’s our complete perform_sentiment_analysis() function!
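
If you’d like to sanity-check the function on its own, a quick call might look like this. The input string is made up, the printed score is only illustrative, and the first run will also download the model:

# A hypothetical quick test of the function above
label, score = perform_sentiment_analysis("This mac and cheese recipe is amazing!")
print(label, score)  # e.g., POSITIVE 0.9998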

Transcribing Audio Files

Next, we’re going to transcribe the content in the audio file into plain text. We’ll do that by defining a transcribe_audio() function that uses the speech_recognition library to transcribe the uploaded audio file:

# app.py
def transcribe_audio(audio_file):
  r = sr.Recognizer()
  with sr.AudioFile(audio_file) as source:
    audio = r.record(source)
    transcribed_text = r.recognize_google(audio)
  return transcribed_text

We initialize a recognizer object (r) from the speech_recognition library and open the uploaded audio file using the AudioFile class. We then record the audio using r.record(source). Finally, we use the Google Speech Recognition API through r.recognize_google(audio) to transcribe the audio and obtain the transcribed text.
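
As an aside, if you want to try transcribe_audio() outside of Streamlit, sr.AudioFile also accepts a plain file path, so a hypothetical local test could be as simple as this (sample.wav is a placeholder filename):

# Hypothetical local test; requires an internet connection for the Google API
print(transcribe_audio("sample.wav"))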

In a main() function, we first check if an audio file is uploaded and the upload button is clicked. If both conditions are met, we proceed with audio transcription and sentiment analysis.

# app.py
def main():
  if audio_file and upload_button:
    try:
      transcribed_text = transcribe_audio(audio_file)
      sentiment_label, sentiment_score = perform_sentiment_analysis(transcribed_text)

Integrating Data With The UI

We have everything we need to display a sentiment analysis for an audio file in our app’s interface. We have the file uploader, a pre-trained language model to analyze the audio’s content, a function for transcribing the audio into text, and a way to return a score. All we need to do now is hook it up to the app!

What I’m going to do is set up two headers and a text area from Streamlit, as well as variables for icons that represent the sentiment score results:

# app.py
st.header("Transcribed Text")
st.text_area("Transcribed Text", transcribed_text, height=200)
st.header("Sentiment Analysis")
negative_icon = "👎"
neutral_icon = "😐"
positive_icon = "👍"

Let’s use conditional statements to display the sentiment score based on which label corresponds to the returned result. If the returned label doesn’t match a condition, we use st.empty() to leave that section blank.

# app.py
if sentiment_label == "NEGATIVE":
  st.write(f"{negative_icon} Negative (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "NEUTRAL":
  st.write(f"{neutral_icon} Neutral (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "POSITIVE":
  st.write(f"{positive_icon} Positive (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

Streamlit has a handy st.info() element for displaying informational messages and statuses. Let’s tap into that to display an explanation of the sentiment score results:

# app.py
st.info(
  "The sentiment score measures how strongly positive, negative, or neutral the feelings or opinions are. "
  "A higher score indicates a positive sentiment, while a lower score indicates a negative sentiment."
)

We should account for error handling, right? If any exceptions occur during the audio transcription and sentiment analysis processes, they are caught in an except block. We display an error message using Streamlit’s st.error() function to inform users about the issue, and we also print the exception traceback using traceback.print_exc():

# app.py
except Exception as ex:
  st.error("Error occurred during audio transcription and sentiment analysis.")
  st.error(str(ex))
  traceback.print_exc()

This code block ensures that the app’s main() function is executed when the script is run as the main program:

# app.py
if __name__ == "__main__":
  main()

It’s common practice to wrap the execution of the main logic within this condition to prevent it from being executed when the script is imported as a module.
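
Before moving on, here is a condensed sketch of the finished main() function with the pieces from the previous sections assembled in one place. Treat it as a reference rather than a drop-in replacement; for brevity, the three icon conditionals are collapsed into a dictionary lookup:

# app.py (a condensed sketch of main(), assembling the pieces above)
def main():
  if audio_file and upload_button:
    try:
      transcribed_text = transcribe_audio(audio_file)
      sentiment_label, sentiment_score = perform_sentiment_analysis(transcribed_text)

      st.header("Transcribed Text")
      st.text_area("Transcribed Text", transcribed_text, height=200)

      st.header("Sentiment Analysis")
      icons = {"NEGATIVE": "👎", "NEUTRAL": "😐", "POSITIVE": "👍"}
      st.write(f"{icons.get(sentiment_label, '')} {sentiment_label.title()} (Score: {sentiment_score})")

      st.info(
        "The sentiment score measures how strongly positive, negative, or neutral the feelings or opinions are. "
        "A higher score indicates a positive sentiment, while a lower score indicates a negative sentiment."
      )
    except Exception as ex:
      st.error("Error occurred during audio transcription and sentiment analysis.")
      st.error(str(ex))
      traceback.print_exc()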

Deployments And Hosting

Now that we have successfully built our audio sentiment analysis tool, it’s time to deploy it and publish it live. For convenience, I am using the Streamlit Community Cloud for deployments since I’m already using Streamlit as a UI framework. That said, I do think it is a fantastic platform because it’s free and allows you to share your apps pretty easily.

But before we proceed, there are a few prerequisites:

  • GitHub account
    If you don’t already have one, create a GitHub account. GitHub will serve as our code repository that connects to the Streamlit Community Cloud. This is where Streamlit gets the app files to serve.
  • Streamlit Community Cloud account
    Sign up for a Streamlit Community Cloud account so you can deploy to the cloud.

Once you have your accounts set up, it’s time to dive into the deployment process:

  1. Create a GitHub repository.
    Create a new repository on GitHub. This repository will serve as a central hub for managing and collaborating on the codebase.
  2. Create the Streamlit application.
    Log into Streamlit Community Cloud and create a new application project, providing details like the name and pointing the app to the GitHub repository with the app files.
  3. Configure deployment settings.
    Customize the deployment environment by specifying a Python version and defining environment variables.

That’s it! From here, Streamlit will automatically build and deploy our application when new changes are pushed to the main branch of the GitHub repository. You can see a working example of the audio analyzer I created: Live Demo.

Conclusion

There you have it! You have successfully built and deployed an app that recognizes speech in audio files, transcribes that speech into text, analyzes the text, and assigns a score that indicates whether the overall sentiment of the speech is positive or negative.

We used a tech stack that only consists of a language model (Transformers) and a UI framework (Streamlit) that has integrated deployment and hosting capabilities. That’s really all we needed to pull everything together!

So, what’s next? Imagine capturing sentiments in real time. That could open up new avenues for instant insights and dynamic applications. It’s an exciting opportunity to push the boundaries and take this audio sentiment analysis experiment to the next level.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Visual Editing Comes To The Headless CMS]]> https://smashingmagazine.com/2023/06/visual-editing-headless-cms/ https://smashingmagazine.com/2023/06/visual-editing-headless-cms/ Tue, 20 Jun 2023 09:00:00 GMT A couple of years ago, my friend Maria asked me to build a website for her architecture firm. For projects like this, I would normally use a headless content management system (CMS) and build a custom front end, but this time I advised her to use a site builder like Squarespace or Wix.

Why a site builder? Because Maria is a highly visual and creative person and I knew she would want everything to look just right. She needed the visual feedback loop of a site builder and Squarespace and Wix are two of the most substantial offerings in the visual editing space.

In my experience, content creators like Maria are much more productive when they can see their edits reflected on their site in a live preview. The problem is that visual editing has traditionally been supported only by site-builders, and they are often of the “low” or “no” code varieties. Visual editing just hasn’t been the sort of thing you see on a more modern stack, like a headless CMS.

Fortunately, this visual editing experience is starting to make its way to headless CMSs! And that’s what I want to do in this brief article: introduce you to headless CMSs that currently offer visual editing features.

But first…

What Is Visual Editing, Again?

Visual editing has been around since the early days of the web. Anyone who has used Dreamweaver in the past probably experienced an early version of visual editing.

Visual editing is when you can see a live preview of your site while you’re editing content. It gives the content creator an instantaneous visual feedback loop and shows their changes in the context of their site.

There are two defining features of visual editing:

  • A live preview so content creators can see their changes reflected in the context of their site.
  • Clickable page elements in the preview so content creators can easily navigate to the right form fields.

Visual editing has been standard among no-code and low-code site-builders like Squarespace, Wix, and Webflow. But those tools are not typically used by developers who want control over their tech stack. Fortunately, now we’re seeing visual editing come to headless CMSs.

Visual Editing In A Headless CMS

A headless CMS treats your content more like a database that's decoupled from the rendering of your site.

Until recently, headless CMSs came with a big tradeoff: content creators are disconnected from the front end, making it difficult to preview their site. They can't see updates as they make them.

A typical headless CMS interface just provides form fields for editing content. This lacks the context of what content looks like on the page. This UX can feel archaic to people who are familiar with real-time editing experiences in tools like Google Docs, Wix, Webflow, or Notion.

Fortunately, a new wave of headless CMSs is offering visual editing in a way that makes sense to developers. This is great news for anyone who wants to empower their team with an editing experience similar to Wix or Squarespace but on top of their own open-source stack.

Let’s compare the CMS editing experience with and without visual editing on the homepage of Roev.com.

You can see that the instant feedback from the live preview combined with the ability to click elements on the page makes the visual editing experience much more intuitive. The improvements are even more dramatic when content is nested deep inside sections on the page, making it hard to locate without clicking on the page elements.

Headless CMSs That Support Visual Editing

Many popular headless CMS offerings currently support visual editing. Let’s look at a few of the more popular options.

Tina

TinaCMS was built from the ground up for visual editing but also offers a “basic editing” mode that’s similar to traditional CMSs. Tina has an open-source admin interface and headless content API that stays synced with files in your Git repository (such as Markdown and JSON).

Storyblok

Storyblok is a headless CMS that was an early pioneer in visual editing. Storyblok stores your content in its database and makes it available via REST and GraphQL APIs.

Sanity.io (via their iframe plugin)

Sanity is a traditional headless CMS with an open-source admin interface. It supports visual editing through the use of its Iframe Pane plugin. Sanity stores your content in its database and makes it available via API.

Builder.io

Builder.io is a closed-source, visual-editing-first headless CMS that stores content in Builder.io’s database and makes it available via API.

Stackbit

Stackbit is a closed-source editing interface that’s designed to be complementary to other headless CMSs. With Stackbit, you can use another headless CMS to store your content and visually edit that content with Stackbit.

Vercel

Although it’s not a CMS, Vercel’s Deploy Previews can show an edit button in the toolbar. This edit button overlays a UI that helps content creators quickly navigate to the correct location in the CMS.

Conclusion

Now that developers are adding visual editing to their sites, I’m seeing content creators like Maria become super productive on a developer-first stack. Teams that were slow to update content before switching to visual editing are now more active and efficient.

There are many great options to build visual editing experiences without compromising developer-control and extensibility. The promise of Dreamweaver is finally here!

]]>
hello@smashingmagazine.com (Scott Gallant)
<![CDATA[Gatsby Headaches And How To Cure Them: i18n (Part 2)]]> https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-2/ https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-2/ Mon, 19 Jun 2023 17:00:00 GMT In Part 1 of this series, we peeked at how to add i18n to a Gatsby blog using a motley set of Gatsby plugins. They are great if you know what they can do, how to use them, and how they work. Still, plugins don’t always work great together since they are often written by different developers, which can introduce compatibility issues and cause an even bigger headache. Besides, we usually use plugins for more than i18n since we also want to add features like responsive images, Markdown support, themes, CMSs, and so on, which can lead to a whole compatibility nightmare if they aren’t properly supported.

How can we solve this? Well, when working with an incompatible, or even an old, plugin, the best solution often involves finding another plugin, hopefully one that provides better support for what is needed. Otherwise, you could find yourself editing the plugin’s original code to make it work (an indicator that you are in a bad place because it can introduce breaking changes), and unless you want to collaborate on the plugin’s codebase with the developers who wrote it, it likely won’t be a permanent solution.

But there is another option!

Table of Contents

Note: Here is the Live Demo.

The Solution: Make Your Own Plugin!

Sure, that might sound intimidating, but adding i18n from scratch to your blog is not so bad once you get down to it. Plus, you gain complete control over compatibility and how it is implemented. That’s exactly what we are going to do in this article, specifically by adding i18n to the starter site — a cooking blog — that we created together in Part 1.

The Starter

You can go ahead and see how we made our cooking blog starter in Part 1 or get it from GitHub.

This starter includes a homepage, blog post pages created from Markdown files, and blog posts authored in English and Spanish.

What we will do is add the following things to the site:

  • Localized routes for the home and blog posts,
  • A locale selector,
  • Translations,
  • Date formatting.

Let’s go through each one together.

Create Localized Routes

First, we will need to create a localized route for each locale, i.e., route our English pages to paths with a /en/ prefix and the Spanish pages to a path with a /es/ prefix. So, for example, a path like my-site.com/recipes/mac-and-cheese/ will be replaced with localized routes, like my-site.com/en/recipes/mac-and-cheese/ for English and my-site.com/es/recipes/mac-and-cheese/ for Spanish.

In Part 1, we used the gatsby-theme-i18n plugin to automatically add localized routes for each page, and it worked perfectly. However, to make our own version, we first must know what happens underneath the hood of that plugin.

What gatsby-theme-i18n does is modify the createPages process to create a localized version of each page. However, what exactly is createPages?

How Plugins Create Pages

When running npm run build in a fresh Gatsby site, you will see in the terminal what Gatsby is doing, and it looks something like this:

success open and validate gatsby-configs - 0.062 s
success load plugins - 0.915 s
success onPreInit - 0.021 s
success delete html and css files from previous builds - 0.030 s
success initialize cache - 0.034 s
success copy gatsby files - 0.099 s
success onPreBootstrap - 0.034 s
success source and transform nodes - 0.121 s
success Add explicit types - 0.025 s
success Add inferred types - 0.144 s
success Processing types - 0.110 s
success building schema - 0.365 s
success createPages - 0.016 s
success createPagesStatefully - 0.079 s
success onPreExtractQueries - 0.025 s
success update schema - 0.041 s
success extract queries from components - 0.333 s
success write out requires - 0.020 s
success write out redirect data - 0.019 s
success Build manifest and related icons - 0.141 s
success onPostBootstrap - 0.164 s
⠀
info bootstrap finished - 6.932 s
⠀
success run static queries - 0.166 s — 3/3 20.90 queries/second
success Generating image thumbnails — 6/6 - 1.059 s
success Building production JavaScript and CSS bundles - 8.050 s
success Rewriting compilation hashes - 0.021 s
success run page queries - 0.034 s — 4/4 441.23 queries/second
success Building static HTML for pages - 0.852 s — 4/4 23.89 pages/second
info Done building in 16.143999152 sec

As you can see, Gatsby does a lot to ship your React components into static files. In short, it takes five steps:

  1. Source the node objects defined by your plugins on gatsby-config.js and the code in gatsby-node.js.
  2. Create a schema from the nodes object.
  3. Create the pages from your /src/page JavaScript files.
  4. Run the GraphQL queries and inject the data on your pages.
  5. Generate and bundle the static files into the public directory.

And, as you may notice, plugins like gatsby-theme-i18n intervene in step three, specifically when pages are created on createPages:

success createPages - 0.016 s

How exactly does gatsby-theme-i18n access createPages? Well, Gatsby exposes an onCreatePage event handler on the gatsby-node.js to read and modify pages when they are being created.

Learn more about creating and modifying pages and the Gatsby building process over at Gatsby’s official documentation.

Using onCreatePage

The createPages process can be modified in the gatsby-node.js file through the onCreatePage API. In short, onCreatePage is a function that runs each time a page is created by Gatsby. Here’s how it looks:

// ./gatsby-node.js
exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  // etc.
};

It takes two parameters inside an object:

  • page holds the information of the page that’s going to be created, including its context, path, and the React component associated with it.
  • actions holds several methods for editing the site’s state. In the Gatsby docs, you can see all available methods. For this example we’re making, we will be using two methods: createPage and deletePage, both of which take a page object as the only parameter and, as you might have deduced, they create or delete the page.

So, if we wanted to add a new context to all pages, it would translate to deleting the pages being created and replacing them with new ones that have the desired context:

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;

  deletePage(page);

  createPage({
    ...page,
    context: {
      ...page.context,
      category: `vegan`,
    },
  });
};

Creating The Pages

Since we need to create English and Spanish versions of each page, it would translate to deleting every page and creating two new ones, one for each locale. And to differentiate them, we will assign them a localized route by adding the locale at the beginning of their path.

Let’s start by creating a new gatsby-node.js file in the project’s root directory and adding the following code:

// ./gatsby-node.js

const locales = ["en", "es"];

exports.onCreatePage = ({page, actions}) => {
  const {createPage, deletePage} = actions;

  deletePage(page);

  locales.forEach((locale) => {
    createPage({
      ...page,
      path: `${locale}${page.path}`,
    });
  });
};

Note: Restarting the development server is required to see the changes.

Now, if we go to http://localhost:8000/en/ or http://localhost:8000/es/, we will see all our content there. However, there is a big caveat. Specifically, if we head back to the non-localized routes — like http://localhost:8000/ or http://localhost:8000/recipes/mac-and-cheese/ — Gatsby will throw a runtime error instead of serving its usual 404 page. This is because we deleted the 404 page in the process of deleting all of the other pages!

Well, the 404 page wasn’t exactly deleted because we can still access it if we go to http://localhost:8000/en/404 or http://localhost:8000/es/404. However, we deleted the original 404 page and created two localized versions. Now Gatsby doesn’t know they are supposed to be 404 pages.

To solve it, we need to do something special to the 404 pages at onCreatePage.

Besides a path, every page object has another property called matchPath that Gatsby uses to match the page on the client side, and it is normally used as a fallback when the user reaches a non-existing page. For example, a page with a matchPath property of /recipes/* (notice the wildcard *) will be displayed on each route at my-site.com/recipes/ that doesn’t have a page. This is useful for making personalized 404 pages depending on where the user was when they reached a non-existing page. For instance, social media could display a usual 404 page on my-media.com/non-existing but display an empty profile page on my-media.com/user/non-existing. In this case, we want to display a localized 404 page depending on whether or not the user was on my-site.com/en/not-found or my-site.com/es/not-found.

The good news is that we can modify the matchPath property on the 404 pages:

// gatsby-node.js

const locales = ["en", "es"];

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/)
      ? locale === "en"
        ? `/*`
        : `/${locale}/*`
      : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      matchPath,
    });
  });
};

This solves the problem, but what exactly did we do in matchPath? The value we are assigning to matchPath is asking:

  • Is the page path /404/?
    • No: Leave it as-is.
    • Yes:
      • Is the locale in English?
        • Yes: Set it to match any route.
        • No: Set it to only match routes on that locale.

This results in the English 404 page having a matchPath of /*, which will be our default 404 page; meanwhile, the Spanish version will have a matchPath equal to /es/* and will only be rendered if the user is on a route that begins with /es/, e.g., my-site.com/es/not-found. Now, if we restart the server and head to a non-existing page, we will be greeted with our usual 404 page.

Besides fixing the runtime error, doing this leaves us with the possibility of localizing the 404 page, which we didn’t achieve in Part 1 with the gatsby-theme-i18n plugin. That’s already a nice improvement we get by not using a plugin!

Querying Localized Content

Now that we have localized routes, you may notice that both http://localhost:8000/en/ and http://localhost:8000/es/ are querying English and Spanish blog posts. This is because we aren’t filtering our Markdown content on the page’s locale. We solved this in Part 1, thanks to gatsby-theme-i18n injecting the page’s locale on the context of each page, making it available to use as a query variable on the GraphQL query.

In this case, we can also add the locale into the page’s context in the createPage method:

// gatsby-node.js

const locales = ["en", "es"];

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/)
      ? locale === "en"
        ? `/*`
        : `/${locale}/*`
      : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      context: {
        ...page.context,
        locale,
      },
      matchPath,
    });
  });
};

Note: Restarting the development server is required to see the changes.

From here, we can filter the content on both the homepage and blog posts, which we explained thoroughly in Part 1. This is the index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And this is the {markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Now, if we head to http://localhost:8000/en/ or http://localhost:8000/es/, we will only see our English or Spanish posts, depending on which locale we are on.

Creating Localized Links

However, if we try to click on any recipe, it will take us to a 404 page since the links are still pointing to the non-localized recipes. In Part 1, gatsby-theme-i18n gave us a LocalizedLink component that worked exactly like Gatsby’s Link but pointed to the current locale, so we will have to create a LocalizedLink component from scratch. Luckily, it’s pretty easy, but we will have to make some preparations first.

Setting Up A Locale Context

For the LocalizedLink to work, we need to know the page’s locale at all times, so we will create a new context that holds the current locale and pass it down to each component. We can implement it in wrapPageElement in the gatsby-browser.js and gatsby-ssr.js Gatsby files. wrapPageElement wraps our entire page element. Keep in mind that Gatsby recommends setting context providers inside wrapRootElement, but in this case, only wrapPageElement can access the page’s context where the current locale can be found.

Let’s create a new directory at ./src/context/ and add a LocaleContext.js file in it with the following code:

// ./src/context/LocaleContext.js

import * as React from "react";
import { createContext } from "react";

export const LocaleContext = createContext();
export const LocaleProvider = ({ locale, children }) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

Next, we will set the page’s context at gatsby-browser.js and gatsby-ssr.js and pass it down to each component:

// ./gatsby-browser.js & ./gatsby-ssr.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";

export const wrapPageElement = ({ element }) => {
  const {locale} = element.props.pageContext;
  return <LocaleProvider locale={locale}>{element}</LocaleProvider>;
};

Note: Restart the development server to load the new files.

Creating LocalizedLink

Now let’s make sure that the locale is available in the LocalizedLink component, which we will create in the ./src/components/LocalizedLink.js file:

// ./src/components/LocalizedLink.js

import * as React from "react";
import { useContext } from "react";
import { Link } from "gatsby";
import { LocaleContext } from "../context/LocaleContext";

export const LocalizedLink = ({ to, children }) => {
  const locale = useContext(LocaleContext);
  return <Link to={`/${locale}${to}`}>{children}</Link>;
};

We can use our LocalizedLink at RecipePreview.js and 404.js just by changing the imports:

// ./src/components/RecipePreview.js

import * as React from "react";
import { LocalizedLink as Link } from "./LocalizedLink";
import { GatsbyImage, getImage } from "gatsby-plugin-image";

export const RecipePreview = ({ data }) => {
  const { cover_image, title, slug } = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};

// ./src/pages/404.js

import * as React from "react";
import { LocalizedLink as Link } from "../components/LocalizedLink";

const NotFoundPage = () => {
  return (
    <main>
      <h1>Page not found</h1>
      <p>
        Sorry 😔 We were unable to find what you were looking for.
        <br />
        <Link to="/">Go Home</Link>.
      </p>
    </main>
  );
};

export default NotFoundPage;
export const Head = () => <title>Not Found</title>;

Redirecting Users

As you may have noticed, we deleted the non-localized pages and replaced them with localized ones, but by doing so, we left the non-localized routes empty with a 404 page. As we did in Part 1, we can solve this by setting up redirects at gatsby-node.js to take users to the localized version. However, this time we will create a redirect for each page instead of creating a redirect that covers all pages.

These are the redirects from Part 1:

// ./gatsby-node.js

exports.createPages = async ({ actions }) => {
  const { createRedirect } = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

These are the new localized redirects:

// ./gatsby-node.js

exports.onCreatePage = ({ page, actions }) => {
  // Create localize version of pages...
  const { createRedirect } = actions;

  createRedirect({
    fromPath: page.path,
    toPath: `/en${page.path}`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: page.path,
    toPath: `/es${page.path}`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

We won’t see the difference right away since redirects don’t work in development, but if we don’t create a redirect for each page, the localized 404 pages won’t work in production. We didn’t have to do this same thing in Part 1 since gatsby-theme-i18n didn’t localize the 404 page the way we did.

Changing Locales

Another vital feature to add is a language selector component to toggle between the two locales. However, making a language selector isn’t completely straightforward because:

  1. We need to know the current page’s path, like /en/recipes/pizza,
  2. Then extract the recipes/pizza part, and
  3. Add the desired locale, getting /es/recipes/pizza.

Similar to Part 1, we will have to access the page’s location information (URL, HREF, path, and so on) in all of our components, so it will be necessary to set up another context provider at the wrapPageElement function to pass down the location object through context on each page. A deeper explanation can be found in Part 1.

Setting Up A Location Context

First, we will create the location context at ./src/context/LocationContext.js:

// ./src/context/LocationContext.js

import * as React from "react";
import { createContext } from "react";

export const LocationContext = createContext();
export const LocationProvider = ({ location, children }) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

Next, let’s pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LocationProvider } from "./src/context/LocationContext";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>{element}</LocationProvider>
    </LocaleProvider>
  );
};

Creating An i18n Config

For the next step, it will come in handy to create a file with all our i18n details, such as the locale code or the local name. We can do it in a new config.js file in a new i18n/ directory in the root directory of the project.

// ./i18n/config.js

export const config = [
  {
    code: "en",
    hrefLang: "en-US",
    name: "English",
    localName: "English",
  },

  {
    code: "es",
    hrefLang: "es-ES",
    name: "Spanish",
    localName: "Español",
  },
];

The LanguageSelector Component

The last thing is to remove the locale (i.e., es or en) from the path (e.g., /es/recipes/pizza or /en/recipes/pizza). Using the following simple but ugly regex, we can remove all the /en/ and /es/ at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
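
To see the pattern in action, here’s a quick sanity check (the example paths are my own):

// A quick illustration of the regex above
const removeLocalePath = /(\/e(s|n)|)(\/*|)/;

console.log("/en/recipes/pizza".replace(removeLocalePath, "")); // "recipes/pizza"
console.log("/es/recipes/pizza".replace(removeLocalePath, "")); // "recipes/pizza"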

Now we can create our LanguageSelector component at ./src/components/LanguageSelector.js:

// ./src/components/LanguageSelector.js

import * as React from "react";
import { useContext } from "react";
// 1
import { config } from "../../i18n/config";
import { Link } from "gatsby";
import { LocationContext } from "../context/LocationContext";
import { LocaleContext } from "../context/LocaleContext";

export const LanguageSelector = () => {
// 2
  const locale = useContext(LocaleContext);
// 3
  const { pathname } = useContext(LocationContext);
// 4
  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");
// 5
  return (
    <div>
      { config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      }) }
    </div>
  );
};

Let’s break down what is happening in that code:

  1. We get our i18n configurations from the ./i18n/config.js file instead of the useLocalization hook that was provided by the gatsby-theme-i18n plugin in Part 1.
  2. We get the current locale through context.
  3. We find the page’s current pathname through context, which is the part that comes after the domain (e.g., /en/recipes/pizza).
  4. We remove the locale part of the pathname using the regex pattern (leaving just recipes/pizza).
  5. We render a link for each available locale except the current one. So we check if the locale is the same as the page before rendering a common Gatsby Link to the desired locale.

Now, inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector, so it is available globally on the site at the top of all pages:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocationProvider } from "./src/context/LocationContext";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LanguageSelector } from "./src/components/LanguageSelector";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>
        <LanguageSelector />
        {element}
      </LocationProvider>
    </LocaleProvider>
  );
};

Localizing Static Content

The last thing to do would be to localize the static content on our site, like the page titles and headers. To do this, we will need to save our translations in a file and find a way to display the correct one depending on the page’s locale.

Page Body Translations

In Part 1, we used the react-intl package for adding our translations, but we can do the same thing from scratch. First, we will need to create a new translations.js file in the /i18n folder that holds all of our translations.

We will create and export a translations object with two properties: en and es, which will hold the translations as strings under the same property name.

// ./i18n/translations.js

export const translations = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
    not_found_page_title: "Page not found",
    not_found_page_body: "😔 Sorry, we were unable find what you were looking for.",
    not_found_page_back_link: "Go Home",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
    not_found_page_title: "Página no encontrada",
    not_found_page_body: "😔 Lo siento, no pudimos encontrar lo que buscabas",
    not_found_page_back_link: "Ir al Inicio",
  },
};

We know the page’s locale from the LocaleContext we set up earlier, so we can load the correct translation using the desired property name.

The cool thing is that no matter how many translations we add, we won’t bloat our site’s bundle size since Gatsby builds the entire app into a static site.

// ./src/pages/index.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const IndexPage = ({ data }) => {
  const recipes = data.allMarkdownRemark.nodes;
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].index_page_title}</h1>
      <h2>{translations[locale].index_page_subtitle}</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// etc.

// ./src/pages/404.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const NotFoundPage = () => {
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].not_found_page_title}</h1>
      <p>
        {translations[locale].not_found_page_body} <br />
        <Link to="/">{translations[locale].not_found_page_back_link}</Link>.
      </p>
    </main>
  );
};

// etc.

Note: Another way we can access the locale property is by using pageContext in the page props.

Page Title Translations

We ought to localize the site’s page titles the same way we localized our page content. However, in Part 1, we used react-helmet for the task since the LocaleContext isn’t available at the Gatsby Head API. So, to complete this task without resorting to a third-party plugin, we will take a different path. We’re unable to access the locale through the LocaleContext, but as I noted above, we can still get it with the pageContext property in the page props.

// ./src/pages/index.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].index_page_title}</title>;
};

// etc.

// ./src/pages/404.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].not_found_page_title}</title>;
};

// etc.

Formatting

Remember that i18n also covers formatting numbers and dates depending on the current locale. We can use the Intl object from the JavaScript Internationalization API. The Intl object holds several constructors for formatting numbers, dates, times, plurals, and so on, and it’s globally available in JavaScript.

In this case, we will use the Intl.DateTimeFormat constructor to localize dates in blog posts. It works by creating a new Intl.DateTimeFormat object with the locale as its parameter:

const DateTimeFormat = new Intl.DateTimeFormat("en");

The new Intl.DateTimeFormat and other Intl instances have several methods, but the main one is the format method, which takes a Date object as a parameter.

const date = new Date();
console.log(new Intl.DateTimeFormat("en").format(date)); // 4/20/2023
console.log(new Intl.DateTimeFormat("es").format(date)); // 20/4/2023

The format method takes an options object as its second parameter, which is used to customize how the date is displayed. In this case, the options object has a dateStyle property to which we can assign "full", "long", "medium", or "short" values depending on our needs:

const date = new Date();

console.log(new Intl.DateTimeFormat("en", {dateStyle: "short"}).format(date)); // 4/20/23
console.log(new Intl.DateTimeFormat("en", {dateStyle: "medium"}).format(date)); // Apr 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "long"}).format(date)); // April 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "full"}).format(date)); // Thursday, April 20, 2023

In the case of our blog posts publishing date, we will set the dateStyle to "long".

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

// etc.

const RecipePage = ({ data, pageContext }) => {
  const { html, frontmatter } = data.markdownRemark;
  const { title, cover_image, date } = frontmatter;
  const { locale } = pageContext;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{new Intl.DateTimeFormat(locale, { dateStyle: "long" }).format(new Date(date))}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

// etc.

Conclusion

And just like that, we reduced the need for several i18n plugins to a grand total of zero. And we didn’t even lose any functionality in the process! If anything, our hand-rolled solution is actually more robust than the system of plugins we cobbled together in Part 1 because we now have localized 404 pages.

That said, both approaches are equally valid, but in times when Gatsby plugins are unsupported in some way or conflict with other plugins, it is sometimes better to create your own i18n solution. That way, you don’t have to worry about plugins that are outdated or left unmaintained. And if there is a conflict with another plugin, you control the code and can fix it. I’d say these sorts of benefits greatly outweigh the obvious convenience of installing a ready-made, third-party solution.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[What Was SmashingConf In San Francisco Like?]]> https://smashingmagazine.com/2023/06/smashingconf-sanfrancisco2023-recap/ https://smashingmagazine.com/2023/06/smashingconf-sanfrancisco2023-recap/ Fri, 16 Jun 2023 08:00:00 GMT “Give them sweet memories.”

It was an unexpected suggestion from one of the Smashing event organizers when I asked for guidance on this article. But then, so much of the week had been unexpected. As a baby dev, volunteering at industry events is a no-brainer; I’ve been to nine this year, and this Smashing Conference has definitely been the standout.

There was none of the frenzied desperation that characterizes so many conferences; rather, the atmosphere was relaxed and casual. I talked to anyone who didn’t actively flee my approach, so by the end of the week, I’d spoken with guests, speakers, sponsors, fellow volunteers, catering staff, and the bouncer at the afterparty. Most people described the week as “fun” and “intimate,” not what one usually expects from a tech conference, although returning guests clearly did expect it.

I believe this pleasant expectation, this trust in Smashing to create something good, was the foundation of the get-together-with-friends vibe at the event. First-timers (myself included) were welcomed and soon made a happy part of the community. Many solo attendees ended the week with the intention of returning next year with their entire team in tow.

A significant reason for this welcoming feeling was the schedule. Speakers were arranged in a single track, on a single stage, thus avoiding the dreaded either/or dilemma and relieving guests and speakers alike of the need to rush around in search of their next session. Breaks were long enough to enjoy lunch at a relaxed pace and to socialize — I even spotted a couple of impromptu chess matches in the lobby.

For those who wished to continue learning over sandwiches and orzo, there were optional lunch sessions held in the workshop building. These sessions were well-attended, and it was heartening to see such honest enthusiasm for the subject matter.

The speakers were very accessible — everyone loved how they were happy to meet, not just for fist bumps but for meaningful conversations. I overheard a group squealing like K-Pop fans about the excellent chat they’d had with their favorite speaker.

As a volunteer, it wasn’t always feasible to sit in the theatre and enjoy the talks in person, but it turned out that missing content wasn’t a concern: presentations were streamed live in the lobby, complete with closed captioning.

Presentation topics seemed to have been thoughtfully curated, such that hardly anyone could settle on a single favorite. For the familiar topics, there was professional eagerness. For the unfamiliar ones, there was first polite interest, then appreciation. The crowd always emerged for caffeine and snacks eager to gather and talk about their recent revelations.

I’ve personally heard from several people who are already trying out ideas they haven’t heard of before.

“I didn’t know [frequently-used tech] could do all that!”

As for the hands-on workshops, I actually heard someone describe these deep dive sessions as “magic.” Workshop topics were practical and, one could argue, essential, including accessibility, flexibility, performance, and more. The breakroom chatter sounded like a huge improv troupe riffing on the theme of shameless plugs for workshops.

“I can’t wait to use this at work — this is going to make [task I don’t understand yet] so much faster!”
“I can’t believe how much I’m learning in just a few hours!”

It was amusing and exciting.

If the speaker presentations, lunch sessions, and full-day workshops weren’t enough for the lifelong learners in attendance, the conference also featured Jam Sessions — an evening of dinner, drinks, and “lightning talks” designed to spark curiosity and interest in fascinating mini-topics. I’m grateful to have been able to present the closing talk on “Developing Emotional Resilience” that night, and if you’re wondering whether you should give a talk of your own next time, the answer is a resounding YES.

Beyond all this quality content, the event organizers had also planned a number of purely fun activities. A Golden Gate 5k kicked off each morning and attracted a dozen of the cheeriest faces I’ve seen on this side of the bridge at any hour. Alcatraz, sailboats, and sea lion pups completed the quintessential San Francisco summer scene (the freezing winds were also quintessential San Francisco summer).

As the only Bay Area native volunteer, I had the honor of leading the photo walk around the picturesque Presidio neighborhood. I’d been expecting a group size comparable to the morning jogs, but over thirty determined photographers showed up for the tour. Together, we visited several popular destinations and braved the famous Lyon Street steps, but the crowd favorite had to be the Yoda fountain at Lucasfilm. Nerds.

After the first conference day, a good crowd met up for the afterparty at Emporium, where drink tickets and game tokens were plentiful. Between pinball, arcade games, and seemingly endless other entertainments, the party was a hit with the night owls.

The Smashing organizers really wanted people to enjoy themselves, and even a bookish misanthrope like me couldn’t help but have a great time. Many of the chattiest people I met that week later confessed, in nearly the exact same words:

“You know, I’m actually an introvert. I usually dread social events — but it feels so comfortable here!”

I had to agree. Thanks to early access to the Smashing Slack channel, we were able to get acquainted in advance and meet in person as not-quite-strangers. More than that, the emphasis on kindness and open-mindedness seemed to attract the loveliest people.

I made more friends in those few days than I had in my whole adult life in the same city. In the week following the conference, I’ve had brunch with an East Coast engineer, lunch and an office tour with a San Francisco team, a laugh-filled hour-long video call with an exec in Uruguay, and I’ve been invited to a group project with an energetic pack of devs dispersed across the country, but connected by our love of coding and cats. I’ve exchanged recipes with a Senior Engineer, book recommendations with an Engineering Manager, and Instagram handles with enough people to start our own mid-sized company. I wonder what kinds of connections others were able to make!

In terms of networking, Smashing was unparalleled, yet it felt like we didn’t “network” at all. We certainly learned a lot, and we have some new LinkedIn connections, but unexpectedly, we made honest-to-goodness friends. As far as I’m concerned, that’s more than a sweet memory. It’s a sweet beginning!

If you’d like to join the SmashingConf team next time, feel free to apply as a volunteer yourself anytime. There are even discounts for students and non-profits available — all you need to do is reach out to the team!

  • SmashingConf Freiburg 🇩🇪 (in-person + online, Sep 4–6) with adventures into design systems, accessibility, CSS, JS and web performance.
  • SmashingConf Antwerp 🇧🇪 (Oct 9–11), on design systems, usability, product design and complex UI challenges.
]]>
hello@smashingmagazine.com (Ren Chen)
<![CDATA[Meet Codux: The React Visual Editor That Improves Developer Experience]]> https://smashingmagazine.com/2023/06/codux-react-visual-editor-improves-developer-experience/ https://smashingmagazine.com/2023/06/codux-react-visual-editor-improves-developer-experience/ Thu, 15 Jun 2023 08:00:00 GMT This article is a sponsored by Wix

Personally, I get tired of the antics at the start of any new project. I’m a contractor, too, so there’s always some new dependency I need to adopt, config files to force me to write the way a certain team likes, and deployment process I need to plug into. It’s never a fire-up-and-go sort of thing, and it often takes the better part of a working day to get it all right.

There are a lot of moving pieces to a project, right? Everything — from integrating a framework and establishing a component library to collaboration and deployments — is a separate but equally important part of your IDE. If you’re like me, jumping between apps and systems is something you get used to. But honestly, it’s an act of Sisyphus rolling the stone up the mountain each time, only to do it again on the next project.

That’s the setup for what I think is a pretty darn good approach to streamline this convoluted process in a way that supports any common project structure and is capable of enhancing it with visual editing capabilities. It’s called Codux, and if you stick with me for a moment, I think you’ll agree that Codux could be the one-stop shop for everything you need to build production-ready React apps.

Codux is More “Your-Code” Than “Low-Code”

I know, I know. "Yay, another visual editor!" says no one, ever. The planet is already full of those, and they’re really designed to give folks developer superpowers without actually doing any development.

That’s so not the case with Codux. There are indeed a lot of "low-code" affordances that could empower non-developers, but that’s not the headlining feature of Codux or really who or what it caters to. Instead, Codux is a fully-integrated IDE that provides the bones of your project while improving the developer experience instead of abstracting it away.

Do you use CodePen? What makes it so popular (and great to use) is that it "just" works. It combines frameworks, preprocessors, a live rendering environment, and modern build tools into a single interface that does all the work on "Save". But I still get to write code in a single place, the way I like it.

I see Codux a lot like that. But bigger. Not bigger in the sense of more complicated, but bigger in that it is more integrated than frameworks and build tools. It _is_ your framework. It _is_ your component library. It _is_ your build process. And it just so happens to have incredibly powerful visual editing controls that are fully integrated with your code editor.

That’s why it makes more sense to call Codux “your code” instead of the typical low-code or no-code visual editing tools. Those are designed for non-developers. Codux, on the other hand, is made for developers.

In fact, here’s a pretty fun thing to do. Open a component file from your project in VS Code and put the editor window next to the Codux window open to the same component. Make a small CSS change or something and watch both the preview rendering and code update instantly in Codux.

That’s just one of those affordances that really polish up the developer experience. Anyone else might overlook something like this, but as a developer, you know how much saved time can add up with something like this.

Code, Inspect And Debug Together At Last

There are a few other affordances available when selecting an element on the interactive stage on Codux:

  • A style panel for editing CSS and trying different layouts. And, again, changes are made in real-time, both in the rendered preview and in your code, which is visible to you all the time — whether directly in Codux or in your IDE.
  • A property panel that provides easy access to all the selected properties of a component with visual controllers to modify them (and see the changes reflected directly in the code).
  • An environment panel that provides you with control over the rendering environment of the component, such as the screen or canvas size, as well as the styling for it.

Maybe Give Codux A Spin

It’s pretty rad that I can fire up a single app to access my component library, code, documentation, live previews, DOM inspector, and version control. If you would’ve tried explaining this to me before seeing Codux, I would’ve said that’s too much for one app to handle; it’d be a messy UI that’s more aspiration than it is a liberating center of development productivity.

No lying. That’s exactly what I thought when the Wix team told me about it. I didn’t even think it was a good idea to pack all that in one place.

But they did, and I was dead wrong. Codux is pretty awesome. And apparently, it will be even more awesome because the FAQ talks about a bunch of new features in the works, things like supporting full frameworks. The big one is an online version that will completely remove the need to set up development environments every time someone joins the team, or a stakeholder wants access to a working version of the app. Again, this is all in the works, but it goes to show how Codux is all about improving the developer experience.

And it’s not like you’re building a Wix site with it. Codux is its own thing — something that Wix built to get rid of their own pain points in the development process. It just so happens that their frustrations are the same that many of us in the community share, which makes Codux a legit consideration for any developer or team.

Oh, and it’s free. You can download it right now, and it supports Windows, Mac, and Linux. In other words, you can give it a spin without buying into anything.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[How To Build Server-Side Rendered (SSR) Svelte Apps With SvelteKit]]> https://smashingmagazine.com/2023/06/build-server-side-rendered-svelte-apps-sveltekit/ https://smashingmagazine.com/2023/06/build-server-side-rendered-svelte-apps-sveltekit/ Wed, 14 Jun 2023 11:00:00 GMT I’m not interested in starting a turf war between server-side rendering and client-side rendering. The fact is that SvelteKit supports both, which is one of the many perks it offers right out of the box. The server-side rendering paradigm is not a new concept. It means that the client (i.e., the user’s browser) sends a request to the server, and the server responds with the data and markup for that particular page, which is then rendered in the user’s browser.

To build an SSR app using the primary Svelte framework, you would need to maintain two codebases, one with the server running in Node, along with some templating engine, like Handlebars or Mustache. The other application is a client-side Svelte app that fetches data from the server.
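
To make that concrete, the server half of such a setup might look something like this minimal Express-and-Handlebars sketch, where the route, template name, and data are purely illustrative:

// server.js (a hypothetical, minimal version of the "two codebases" approach)
const express = require("express");
const { engine } = require("express-handlebars");

const app = express();

// Register Handlebars as the templating engine.
app.engine("handlebars", engine());
app.set("view engine", "handlebars");

// Render the "home" template on the server with some data.
app.get("/", (req, res) => {
  res.render("home", { jobs: [{ title: "Frontend Developer" }] });
});

app.listen(3000);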

The approach we just looked at isn’t without disadvantages. Two immediately come to mind, and I’m sure you thought of them after reading that last paragraph:

  1. The application is more complex because we’re effectively maintaining two systems.
  2. Sharing logic and data between the client and server code is more difficult than fetching data from an API on the client side.
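
For contrast, here is a minimal sketch of what the hand-rolled server half of that two-codebase setup might look like. Express and the inlined markup are stand-ins chosen purely for illustration; they aren’t part of this tutorial’s stack:

import express from 'express';

const app = express();

// Hard-coded data standing in for a database or API call.
const jobs = [{ job_title: 'Job 1' }, { job_title: 'Job 2' }];

app.get('/', (_req, res) => {
  // A real setup would render a Handlebars or Mustache template here;
  // the markup is inlined to keep the sketch self-contained.
  const items = jobs.map((job) => `<li>${job.job_title}</li>`).join('');
  res.send(`<ul>${items}</ul>`);
});

app.listen(3000);

The client-side Svelte app would then live in a second project that fetches from endpoints like this one, which is exactly the duplication SvelteKit removes.
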
SvelteKit Simplifies The Process

SvelteKit streamlines things by handling the complexity of the server and the client on its own, allowing you to focus squarely on developing the app. There’s no need to maintain two applications or do a tightrope walk sharing data between the two.

Here’s how:

  • Each route can have a +page.server.ts file that’s used to run code on the server and return data seamlessly to your client code.
  • If you use TypeScript, SvelteKit auto-generates types that are shared between the client and server.
  • SvelteKit provides an option to select your rendering approach based on the route. You can choose SSR for some routes and CSR for others, like maybe your admin page routes.
  • SvelteKit also supports routing based on the file system, making it much easier to define new routes than having to hand-roll them yourself (there’s a quick sketch of the convention below).
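
To make that last point concrete, here is roughly how files under src/routes map to URLs. The jobs paths are hypothetical examples, not routes we will build in this tutorial:

src/routes/+page.svelte             →  /
src/routes/jobs/+page.svelte        →  /jobs
src/routes/jobs/[id]/+page.svelte   →  /jobs/42 (dynamic segment)
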
SvelteKit In Action: Job Board

I want to show you how streamlined the SvelteKit approach is compared to the traditional way we have been dancing between the SSR and CSR worlds, and I think there’s no better way to do that than using a real-world example. So, what we’re going to do is build a job board — basically a list of job items — while detailing SvelteKit’s role in the application.

When we’re done, what we’ll have is an app where SvelteKit fetches the data from a JSON file and renders it on the server side. We’ll go step by step.

First, Initialize The SvelteKit Project

The official SvelteKit docs already do a great job of explaining how to set up a new project. But, in general, we start any SvelteKit project in the command line with this command:

npm create svelte@latest job-listing-ssr-sveltekit

This command creates a new project folder called job-listing-ssr-sveltekit on your machine and initializes Svelte and SvelteKit for us to use. But we don’t stop there — we get prompted with a few options to configure the project:

  1. First, we select a SvelteKit template. We are going to stick to using the basic Skeleton Project template.
  2. Next, we can enable type-checking if you’re into that. Type-checking provides assistance when writing code by watching for bugs in the app’s data types. I’m going to use the “TypeScript syntax” option, but you aren’t required to use it and can choose the “None” option instead.

There are additional options from there that are more a matter of personal preference, such as ESLint for linting, Prettier for code formatting, Playwright for browser testing, and Vitest for unit testing.

If you are familiar with any of these, you can add them to the project. We are going to keep it simple and not select anything from the list since what I really want to show off is the app architecture and how everything works together to get data rendered by the app.

Now that we have the template for our project ready for us, let’s do the last bit of setup by installing the dependencies for Svelte and SvelteKit to do their thing:

cd job-listing-ssr-sveltekit
npm install

There’s something interesting going on under the hood that I think is worth calling out:

Is SvelteKit A Dependency?

If you are new to Svelte or SvelteKit, you may be pleasantly surprised when you open the project’s package.json file. Notice that SvelteKit is listed in the devDependencies section. The reason for that is Svelte (and, in turn, SvelteKit) acts like a compiler that takes all your .js and .svelte files and converts them into optimized JavaScript code that is rendered in the browser.

This means the Svelte package is actually unnecessary when we deploy the app to the server. That’s why it is not listed as a dependency in the package file. The final bundle of our job board app is going to contain just the app’s code, which means the bundle is way smaller and loads faster than a regular Svelte-based architecture.

Look at how tiny and readable the package.json file is!

{
    "name": "job-listing-ssr-sveltekit",
    "version": "0.0.1",
    "private": true,
    "scripts": {
        "dev": "vite dev",
        "build": "vite build",
        "preview": "vite preview",
        "check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
        "check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch"
    },
    "devDependencies": {
        "@sveltejs/adapter-auto": "^2.0.0",
        "@sveltejs/kit": "^1.5.0",
        "svelte": "^3.54.0",
        "svelte-check": "^3.0.1",
        "tslib": "^2.4.1",
        "typescript": "^4.9.3",
        "vite": "^4.0.0"
    },
    "type": "module"
}

I really find this refreshing, and I hope you do, too. Seeing a big list of packages tends to make me nervous because all those moving pieces make the entirety of the app architecture feel brittle and vulnerable. The concise SvelteKit output, by contrast, gives me much more confidence.

Creating The Data

We need data coming from somewhere that can inform the app on what needs to be rendered. I mentioned earlier that we would be placing data in and pulling it from a JSON file. That’s still the plan.

As far as the structured data goes, what we need to define are properties for a job board item. Depending on your exact needs, there could be a lot of fields or just a few. I’m going to proceed with the following:

  • Job title,
  • Job description,
  • Company name,
  • Compensation.

Here’s how that looks in JSON:

[{
    "job_title": "Job 1",
    "job_description": "Very good job",
    "company_name": "ABC Software Company",
    "compensation_per_year": "$40000 per year"
}, {
    "job_title": "Job 2",
    "job_description": "Better job",
    "company_name": "XYZ Software Company",
    "compensation_per_year": "$60000 per year"
}]

Now that we’ve defined some data, let’s open up the main project folder. There’s a sub-directory in there called src. We can open that, create a new folder called data, and add the JSON file we just made to it as job-listing.json (the name we’ll import it by later). We will come back to this file when we work on fetching the data for the job board.

Adding TypeScript Model

Again, TypeScript is completely optional. But since it’s so widely used, I figure it’s worth showing how to set it up in a SvelteKit project.

We start by creating a new models.ts file in the project’s src folder. This is the file where we define all of the data types that can be imported and used by other components and pages, and TypeScript will check them for us.

Here’s the code for the models.ts file:

export type JobsList = JobItem[]

export interface JobItem {
  job_title: string
  job_description: string
  company_name: string
  compensation_per_year: string
}

There are two data types defined in the code:

  1. JobsList contains the array of job items.
  2. JobItem contains the job details (or properties) that we defined earlier.

The Main Job Board Page

We’ll start by developing the code for the main job board page that renders a list of available job items. Open the src/routes/+page.svelte file, which is the main job board. Notice how it exists in the /src/routes folder? That’s the file-based routing system I referred to earlier when talking about the benefits of SvelteKit. The file’s location and name are automatically turned into a route. That’s a real DX gem, as it saves us the time of coding the routes ourselves and maintaining more code.

While +page.svelte is indeed the main page of the app, it’s also the template for any generic page in the app. But we can create a separation of concerns by adding more structure in the /src/routes directory with more folders and sub-folders that result in different paths. SvelteKit’s docs have all the information you need for routing and routing conventions.

This is the markup and styles we’ll use for the main job board:

<div class="home-page">
  <h1>Job Listing Home page</h1>
</div>

<style>
  .home-page {
    padding: 2rem 4rem;
    display: flex;
    align-items: center;
    flex-direction: column;
    justify-content: center;
  }
</style>

Yep, this is super simple. All we’re adding to the page is an <h1> tag for the page title and some light CSS styling to make sure the content is centered and has some nice padding for legibility. I don’t want to muddy the waters of this example with a bunch of opinionated markup and styles that would otherwise be a distraction from the app architecture.

Run The App

We’re at a point now where we can run the app using the following in the command line:

npm run dev -- --open

The -- --open argument automatically opens the job board page in the browser. That’s just a small but nice convenience. You can also navigate to the URL that the command line outputs.

The Job Item Component

OK, so we have a main job board page that will be used to list job items from the data fetched by the app. What we need is a new component specifically for the jobs themselves. Otherwise, all we have is a bunch of data with no instructions for how it is rendered.

Let’s take care of that by opening the src folder in the project and creating a new sub-folder called components. And in that new /src/components folder, let’s add a new Svelte file called JobDisplay.svelte.

We can use this for the component’s markup and styles:

<script lang="ts">
  import type { JobItem } from "../models";
  export let job: JobItem;
</script>

<div class="job-item">
  <p>Job Title: <b>{job.job_title}</b></p>
  <p>Description: <b>{job.job_description}</b></p>
  <div class="job-details">
    <span>Company Name : <b>{job.company_name}</b></span>
    <span>Compensation per year: <b>{job.compensation_per_year}</b></span>
  </div>
</div>

<style>
  .job-item {
    border: 1px solid grey;
    padding: 2rem;
    width: 50%;
    margin: 1rem;
    border-radius: 10px;
  }

  .job-details {
    display: flex;
    justify-content: space-between;
  }
</style>

Let’s break that down so we know what’s happening:

  1. At the top, we import the TypeScript JobItem model.
  2. Then, we define a job prop with a type of JobItem. This prop is responsible for getting the data from its parent component so that we can pass that data to this component for rendering.
  3. Next, the HTML provides this component’s markup.
  4. Last is the CSS for some light styling. Again, I’m keeping this super simple with nothing but a little padding and minor details for structure and legibility. For example, justify-content: space-between adds a little visual separation between the company name and the compensation details.

Fetching Job Data

Now that we have the JobDisplay component all done, we’re ready to pass it data to fill in all those fields to be displayed in each JobDisplay rendered on the main job board.

Since this is an SSR application, the data needs to be fetched on the server side. SvelteKit makes this easy with a separate load function that fetches data and can serve as a hook for other actions on the server when the page loads.

To fetch, let’s create yet another new TypeScript file — this time called +page.server.ts — in the project’s routes directory. Like the +page.svelte file, this name also has a special meaning that makes the file run on the server when the route is loaded. Since we want this on the main job board page, we will create the file in the routes directory and include this code in it:

import jobs from '../data/job-listing.json';
import type { JobsList } from '../models';

const job_list: JobsList = jobs;

export const load = (() => {
  return {
    job_list
  };
});

Here’s what we’re doing with this code:

  1. We import data from the JSON file. This is for simplicity’s sake; in a real app, you would likely fetch this data from a database by making an API call.
  2. Then, we import the TypeScript model we created for JobsList.
  3. Next, we create a new job_list variable and assign the imported data to it.
  4. Last, we define a load function that returns an object with the assigned data. SvelteKit automatically calls this function when the page is requested. So, the magic of SSR happens here: we fetch the data on the server and build the HTML with the data we get back. (A typed variant of this function is sketched right after this list.)
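
If you want the load function itself to be type-checked, SvelteKit also generates a PageServerLoad type for each route. Here is a minimal sketch of that variant, assuming the same imports as above (the satisfies keyword needs TypeScript 4.9+, which our project already has):

import type { PageServerLoad } from './$types';
import jobs from '../data/job-listing.json';
import type { JobsList } from '../models';

const job_list: JobsList = jobs;

// satisfies checks the function against PageServerLoad
// without widening the inferred return type.
export const load = (() => {
  return { job_list };
}) satisfies PageServerLoad;
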
Accessing Data From The Job Board

SvelteKit makes accessing data relatively easy by passing data to the main job board page in a way that checks the types for errors in the process. We can import a type called PageServerData in the +page.svelte file. This type is autogenerated and will have the data returned by the +page.server.ts file. This is awesome, as we don’t have to define types again when using the data we receive.

Let’s update the code in the +page.svelte file, like the following:

<script lang="ts">
  import JobDisplay from '../components/JobDisplay.svelte';
  import type { PageServerData } from './$types';

  export let data: PageServerData;
</script>

<div class="home-page">
  <h1>Job Listing Home page</h1>

  {#each data.job_list as job}
    <JobDisplay job={job}/>
  {/each}
</div>

<style>....</style>

This is so cool because:

  1. The #each syntax is a Svelte feature that repeats the JobDisplay component for every job in the data (a keyed variant is sketched after this list).
  2. At the top, we are importing both the JobDisplay component and the PageServerData type, the latter autogenerated by SvelteKit in ./$types.
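
One aside on #each: Svelte also supports keyed iteration, which helps it track which DOM node belongs to which item when the list changes. This sketch assumes job_title values are unique, which our sample data happens to satisfy:

{#each data.job_list as job (job.job_title)}
  <JobDisplay job={job}/>
{/each}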

Deploying The App

We’re ready to compile and bundle this project in preparation for deployment! We get to use the same command in the Terminal as most other frameworks, so it should be pretty familiar:

npm run build

Note: You might get the following warning when running that command: “Could not detect a supported production environment.” We will fix that in just a moment, so stay with me.

From here, we can use the npm run preview command to check the latest built version of the app:

npm run preview

This is a good way to gain confidence in the build locally before deploying it to a production environment.

The next step is to deploy the app to the server. I’m using Netlify, but that’s purely for example, so feel free to go with another option. SvelteKit offers adapters that will deploy the app to different server environments. You can get the whole list of adapters in the docs, of course.

The real reason I’m using Netlify is that deploying there is super convenient for this tutorial, thanks to the adapter-netlify plugin that can be installed with this command:

npm i -D @sveltejs/adapter-netlify

This does, indeed, introduce a new dependency in the package.json file. I mention that because you know how much I like to keep that list short.

After installation, we can update the svelte.config.js file to consume the adapter:

import adapter from '@sveltejs/adapter-netlify';
import { vitePreprocess } from '@sveltejs/kit/vite';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    preprocess: vitePreprocess(),

    kit: {
        adapter: adapter({
            edge: false,
            split: false
        })
    }
};

export default config;

Real quick, this is what’s happening:

  1. The adapter is imported from adapter-netlify.
  2. The new adapter is passed to the adapter property inside the kit.
  3. The edge boolean value can be used to configure the deployment to a Netlify edge function.
  4. The split boolean value is used to control whether we want to split each route into separate edge functions.

More Netlify-Specific Configurations

Everything from here on out is specific to Netlify, so I wanted to break it out into its own section to keep things clear.

We can add a new file called netlify.toml at the top level of the project folder and add the following code:

[build]
  command = "npm run build"
  publish = "build"

I bet you know what this is doing: it tells Netlify which command builds the app and which directory to publish. It also allows us to control deployment from a Netlify account, which might be a benefit to you. To do this, we have to:

  1. Create a new project in Netlify,
  2. Select the “Import an existing project” option, and
  3. Provide permission for Netlify to access the project repository. You get to choose where you want to store your repo, whether it’s GitHub or some other service.

Since we have set up the netlify.toml file, we can leave the default configuration and click the “Deploy” button directly from Netlify.
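
As an aside, if you would rather deploy from the terminal than click through the Netlify UI, the Netlify CLI covers the same flow. A quick sketch, assuming you are logged in to your Netlify account:

npm i -g netlify-cli
netlify init           # link the folder to a new or existing Netlify site
npm run build          # produce the build output locally
netlify deploy --prod  # push the built site to production
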

Once the deployment is completed, you can navigate to the site using the provided URL in Netlify and see the final result.

Here’s something fun. Open up DevTools when viewing the app in the browser and notice that the HTML contains the actual data we fetched from the JSON file. This way, we know for sure that the right data is rendered and that everything is working.
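
You can run the same check from the command line. The URL below is a placeholder for whatever address Netlify assigns your deployment:

curl -s https://your-site-name.netlify.app | grep "ABC Software Company"

If the grep matches, the job data arrived pre-rendered in the HTML rather than being fetched by client-side JavaScript.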

Note: The source code of the whole project is available on GitHub. All the steps we covered in this article are divided as separate commits in the main branch for your reference.

Conclusion

In this article, we have learned about the basics of server-side rendered apps and the steps to create and deploy a real-life app using SvelteKit as the framework. Feel free to share your comments and perspective on this topic, especially if you are considering picking SvelteKit for your next project.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Sriram Thiagarajan)
<![CDATA[Smashing Podcast Episode 62 With Slava Shestopalov: What Is Design Management?]]> https://smashingmagazine.com/2023/06/smashing-podcast-episode-62/ https://smashingmagazine.com/2023/06/smashing-podcast-episode-62/ Tue, 13 Jun 2023 14:00:00 GMT In this episode of The Smashing Podcast, we ask what is a design manager? What does it take and how does it relate to the role of Designer? Vitaly talks to Slava Shestopalov to find out.

Show Notes

Weekly Update

Transcript

Vitaly: He’s a design leader, lecturer and design educator. He has seen it all, working as a graphic designer in his early years and then moving to digital products, UX, accessibility and design management. Most recently, he has worked as a lead designer and design manager at the software development company ELEKS and then, later, Bolt, the all-in-one mobility app. Now, he’s very keen on building bridges between various areas of knowledge rather than specializing in one single thing, and we’ll talk about that as well. He also loves to write, and he has a passion for medieval-style UX design memes. Who doesn’t? And he is passionate about street and architecture photos. Originally from Cherkasy, Ukraine, he now lives in Berlin with his wonderful wife, Oksana. So we know that he’s an experienced designer and design manager, but did you know that he also loves biking, waking up at 5:00 AM to explore cities and can probably talk for hours about every single water tower in your city? My Smashing friends, please welcome Slava Shestopalov. Hello Slava. How are you doing today?

Slava: I am Smashing.

Vitaly: Oh yes, always.

Slava: Or at least I was told to say that.

Vitaly: Okay, so that’s a fair assessment in this case. It’s always a pleasure to meet you and to see you. I know so many things about you. I know that you’re very pragmatic. I know that you always stay true to your words. I know that you care about the quality of your work. But it’s always a pleasure to hear a personal story from somebody who’s kind of explaining where they’re coming from, how they ended up where they are today. So maybe I could ask you first to kind of share your story. How did you arrive kind of where you are today? Where you coming from or where you’re going? That’s very philosophical, but let’s start there.

Slava: That’s quite weird. I mean, my story is quite weird because I’m a journalist by education and I never thought of being a designer at school or the university. During my study years, I dreamt about something else. Maybe I didn’t really have a good idea of my future profession rather about the feeling that it should bring, that it should be something interesting, adventurous, something connected with helping other people. I dreamt about being a historian, geographer, maybe traveling in the pursuit of new adventures or inventions, but ended up being a journalist.

Slava: My parents recommended me to choose this path because they thought I was a quite talkative person and it would’ve been a great application for such a skill. And since I didn’t have any better ideas, I started studying at the university, studying journalism. And then, in the third year of studying, during our practice (and by the way, I met my wife there, at the university; we have been together since the first day of studying, we were in the same academic group, not only on the same faculty), we were passing our journalistic practice at the Press Department of the local section of the Ministry of Emergencies, meaning that we were writing articles about various accidents happening in the Cherkasy region, taking photos of, sometimes, not very funny things. And accidentally, there I tried CorelDRAW; there is a whole generation of designers who don’t even know what those words mean.

Vitaly: Well, you don’t use CorelDRAW anymore, do you?

Slava: Not anymore. I don’t even know whether this software is still available. So I accidentally tried that in our editorial office where, as part of our practice, it was not even real work. And somehow, it was more or less okay. I created the first layout. Of course, now I am scared to look at it. I don’t even have it saved somewhere on my computer. That’s an abomination, not design. But back then, it worked out and I started developing this skill as a secondary skill. I’m a self-taught designer, so I never had any systematic way of learning design, rather learning based on my own mistakes, trying something new, producing a lot of work that I’m not proud of.

Vitaly: But also, I’m sure work that you are proud of.

Slava: Yeah. But then, later, I joined first small design studios and I’m forever thankful to my, back then, art director who once came to my desk, looked at the layout on my screen and told me, "Slava, please don’t get offense, but there is a book that you have to read." And he handed me the book Design for Non-Designers. That’s an amazing book, I learned a lot from it, the basics of composition, contrast, alignment, the visual basics. And I started applying it to my work, it got better. Then of course, I read many more books for designers, but also, books on design, on business and management and other topics. And gradually, by participating in more and more complex projects, I got to the position where I am right now.

Vitaly: So it’s interesting for me because actually I remember my days coming also without any formal education as a designer; I actually ended up just playing with boxes on a page. And I actually came to design through the lens of HTML and CSS back in the day, really, through frontend development. And this is why inclusive design and accessibility are so close to my heart. And it’s the path that many people actually take: moving into design and then just getting better at design.

Vitaly: But you decided to go even further than that. I think in 2019, you transitioned from the role of a lead designer, if I’m not mistaken, to design manager. Was it something that you envisioned, that you just felt like this is a time to do that? Because again, there are two kinds of people that I encounter. Some people really go into management thinking that this is just a natural progression of their career, you cannot be just a designer, and this is in quotation marks, "forever," so you’re going to go into the managerial role. And some people feel like, let me try that and see if it’s for me and if not, I can always go back to design or maybe to another company product team and whatnot. What was it like for you? Why did you decide to take this route?

Slava: The reason was curiosity. I wouldn’t say that I was the real manager because design management is slightly different, probably even other types of management like product management, engineering management; it’s not completely management because of what is required there. If you look at the vacancies, you will notice that the domain knowledge, the hard skills, are essential and you’ll be checked whether you have those skills as well, apart from the managerial competence. So I wouldn’t say that this kind of management is 100% true, complete management as we can imagine it in the classical meaning, it’s the combination of what you’ve been doing before with management and the higher the percentage of management is, the higher in the hierarchy you go.

Slava: In my situation, switching from the lead designer to design manager was not that crucial. I would say more critical thing that I experienced was switching from a senior designer to lead designer because this is the point where I got my first team whom I had to lead. And that was the turning point when you realize that the area of your responsibility is not only yourself and your project, but also someone else. And in modern world, we don’t have feudalism and we cannot directly tell people what to do, we are not influencing their choices directly. That’s why it’s getting harder to manage without having the real power. And we are in the civilized world, authoritarian style is not working anymore, and that’s great, but we should get inventive to work with people using gentle, mild methods, taking into account what they want as personalities, but at the same time reaching the business goals of the company and KPIs of the team.

Vitaly: Right. But then also, speaking about the gentle way of managing, I remember the talk that you gave about the things that you learned and some of the things that you consider to be important in a design manager position. So I’m curious if you could share some bits of knowledge of things that you discovered maybe the hard way, which were a little bit surprising to you as you were in that role, for example, also in Bolt. For the many designers who might be listening at this point and thinking, "Oh, actually, I was always thinking about design management, maybe I should go there," what were some things that were surprising to you and some things that were really difficult?

Slava: Something that was surprising both for me and for other people with whom I talk about design management is that we perceive management in the wrong way. We have expectations pretty far from reality. There are some managerial activities that are quite typical for designers, for the design community in general, something that we encounter so often that we tend to think that this is actually management. Maybe there is something else that we don’t see at the moment, but not much else, not much of that management is hidden. And that’s why when we jump into management, we discover a lot of unknown things that this type of work includes.

Slava: For example, as a Ukrainian, I know that, in our country, many designers are self-taught designers because the profession develops much faster than the higher education. And that’s why people organize themselves into communities and pass knowledge to each other much faster and easier. And there are so many private schools and private initiatives that spread the knowledge and do that more efficiently so that after couple of months of studying, you get something. Of course, there might be many complaints about the quality of that education, but the sooner you get to the first project, the sooner you make your first mistakes, the better you learn the profession and then, you won’t repeat them again. That’s why I know the power of this community. And mentorship, knowledge-sharing is something extremely familiar to Ukrainian designers.

Slava: And then, generally, I observe the same tendency in Western Europe that knowledge-sharing, mentorship is the usual thing that many designers do, that many designers practice. And we think that when we switch to management, we will simply scale this kind of activity. In reality, it’s just not even the largest part of management. And when people are officially promoted to managers, to leaders, they discover a lot of other areas like hiring people and then being responsible for the hires because it’s not enough just to participate in a technical interview and check the hard skills of a candidate, but also then live with this decision because you cannot easily fire a person, and sometimes, it’s even wrong because as a manager you are supposed to work with this person and develop them and help them grow or help them onboard better and pass this period of adaptation. By the way, adaptation and onboarding are another thing, then there are retention cases, resolving problems when your employees are not satisfied with what they have right now, including you as a manager, and many other things like salary, compensation, bonuses, team building, trust and relationships in the team, performance management, knowledge assessments.

Vitaly: Right. But then, is there even at all any time then to be designing as you’re a design manager? I know that in some teams, in some companies you have this kind of roles where, well, you’re a design manager, sometimes it would be called just... Yeah, well, hmm — sometimes design leads are actually also managers, depending if it’s like a small company or a larger company. And then, would you say that given the scope that is really changing when you’re kind of moving to management, should you have hopes that you will still have time to play with designs in Figma?

Slava: It depends on how far you go and on the org structure of the particular company. In some cases, you still have plenty of time to design because management doesn’t occupy that much time, you don’t have many subordinates, or the company is so small that the processes are not very formalized. In that case, yep, you can still design maybe 50% of your time, maybe even 70% of your time and manage during the rest of the time. But there are large companies where management occupies more and more time and then, yeah, probably you won’t be designing or at least designing the same way as it used to be before.

Slava: There are multiple levels of design, multiple levels of abstraction. For example, when you’re moving pixels in Figma in order to create a well-balanced button, that’s design. But when you’re creating a customer journey map or mapping a service blueprint together with stakeholders from other departments of your company, that’s design as well, but on a higher level of abstraction. You are building a bit larger picture of the product or service or the whole experience throughout products and multiple services of the company. So I would say that there is always space for design, but this design might get less digital and more connected with organizational design, interaction between different departments and other stuff like that.

Vitaly: Right. So maybe if we go back a little bit into team building or specifically the culture and the way teams are built, obviously, we kind of moved, I don’t know when it was, but we kind of moved to this idea that T-shaped employees is a good thing. So you basically specialize in one thing and then, you have a pretty general understanding about what’s going on in the rest of the organization, the rest of the product and so on. It’s quite shallow, but then, in one thing, you specialize. At the same time, you see a lot of people who call themselves generalists, they kind of know a lot about different things but never really specialized deeply into one thing. And so, you also have this, this is probably considered to be not necessarily just the I shape, where you kind of get very deep in one thing, but really, this is it, you just specialized so deep that you have pretty much no solid understanding about what’s happening around.

Vitaly: And then, one thing that has been kind of discussed recently, I’ve seen at least a few articles about that is a V-shape, where you kind of have a lot of depth in one thing. You also have a pretty okay, solid, general understanding about what’s going on. But then, you also have enough skills or enough information about the adjacent knowledge within the product that you’re working on. So I’m wondering at this point, let’s say if you build a team of designers, what kind of skills or what kind of shape if you like, do we need to still remain quite, I would say, interesting to companies small and large? What kind of shape would that be? If that makes sense.

Slava: Yeah, so you want me to give you a silver bullet, right, for-

Vitaly: Yes.

Slava: ... a company?

Vitaly: Ideally, yes.

Slava: Doesn’t exist. It doesn’t exist. On the one hand, I think that’s a good discussion, discussions about the skill sets of designers, but on the other hand, we are talking a lot about ourselves, maybe, more than representatives of all the other professions about what we should call our profession, what shapes, skillset should we have, what frameworks and tools should we use? It’s extremely designer-centered. And here, of course, I can talk for hours and participate in holy wars about what’s the best name for this, all that, but essentially, at the end of the day, I realize that it doesn’t matter, it doesn’t make sense at all. Okay, whatever we decide, if you are whatever shape designer, but you are not useful in this world, you cannot reach the goal and you cannot find your niche and make users happy and business happy, then it doesn’t matter what’s written on your resume.

Vitaly: Right. So-

Slava: But then, on the one hand, yeah, of course, logically, when I think about it, I do support the T-shaped concept. But again, it depends on how you understand it, whether the horizontal bar of the T is about shallow knowledge or good enough knowledge or decent knowledge. You see how thick it is? And that’s why we have another concept with this V-shaped designer, which is essentially another representation of the T-shaped format. The idea is the same: as a human being, of course, you want to specialize in something that’s a passion, that you maybe love design for and maybe that’s why you came into the profession. But at the same time, you are obliged to know, to a certain minimally required extent, the whole entirety of your profession.

Slava: Ask any other professional, a surgeon, police person, whoever, financial expert, of course, they have their favorite topics, but at the same time, there is a certain requirement to you as a specialist to obtain certain amount of knowledge and skills.

Slava: The same about designers, I don’t see how we are different from other professions. It’s why it’s quite fair to have this expectation that the person would know something about UX research. They are not obliged to be as professional and advanced as specialized UX researchers, but that’s fine for a designer to know about UX research, to do some UX research. The same about UX researchers, it never hurts to know the basics of design in order to understand what your colleagues are doing and then, you collaborate better together.

Vitaly: Which brings me, of course, to the question that I think you brought up in an article, I think maybe five or six years ago. You had a lot of comments on that article. I remember that article very vividly because you argued about all the different ways of how we define design, UX, CX and all the different wordings and abbreviations, service designer, CX designer, UX designer, and so many other things.

Vitaly: I mean, it’s really interesting to me because when I look back, I realize now that we’ve been working very professionally in this industry, in whatever you want to call design industry, UX industry, digital design industry for like... What? ... three decades now, maybe even more than that, really trying to be very professional. But when we look around, actually, and this is just a funny story because just as we started trying to record this session, we spent 14 minutes trying to figure out how to do that in the application here. So what went wrong, Slava? I mean, 30 years is a long time to get some things right and I think that we have done a lot of things. But frankly, too often, when you think about general experience that people would get, be it working with public services, working with insurance companies, working with something that’s maybe less exciting than the landing page or a fancy product or SaaS, very often it’s just not good. What went wrong, Slava? Tell us.

Slava: Nothing went wrong. Everything is fine. The world is getting more and more complex over time, but something never changed, and it’s people, or we didn’t change. Our brain is more or less the same as it was thousand years ago, maybe a couple of thousand years ago and that’s the reason. We are people, we are not perfect. Technology might be amazing, it even feels magical, but we are the same. We are not perfect. We’re not always driven by rational intention to do something well. There are many people who are not very excited about their jobs, that’s why they provide not so good service. There are periods when a good person does bad job and they will improve later, but the task that they deliver today because of many reasons will be at this lower quality.

Slava: Then decision making, we are emotional beings and even if you use a hundred of frameworks about decision making and prioritizing, it doesn’t deny our nature. There are even people who learned to manipulate all the modern techniques, who learned about design thinking and workshops and try to use it to their own advantage. Like, "Oh, okay, I cannot persuade my team, so let’s do this fancy exercise with colored sticky notes and try to-

Vitaly: Well, who doesn’t like colored sticky notes, Slava, come on.

Slava: Digital colored sticky note, they’re still colored and look like sticky notes, right? And those people just want to push their own ideas through workshops. But workshops were designed for something else. The same with business, there are unethical business models still flourishing, there are dark patterns just because some people don’t care. So the reason is that we are the same, we are not perfect.

Vitaly: Right. Well-

Slava: We create design for humans, but we are humans as well.

Vitaly: But sometimes I feel like we are designing for humans, but then, at the same time, I feel that we are spending more and more time designing with AI sometimes for AI, this is how it feels to me. I don’t know about you, every now and again I still get a feeling that, okay, this message that was written by somebody and sent to me, it has a little bit of sense or feel or I don’t know, taste of ChatGPT on it. Just I can tell sometimes that this is kind of for humans, but it’s in a way appears to me as if it was written for AI. So do you have this feeling sometimes that you get that email or you get that message, it’s a little bit too AI-ish? Do you have this experience?

Slava: Sometimes I have this experience, but the reason is that it’s a hot topic right now. You may have already forgotten about another trendy topic, NFT, blockchain, everything was in blockchain, everything was NFT. But over time, people realize where the use cases are really strong and deserve our efforts and where it just doesn’t fit. It’s like with every new technology, it passes the same stages. There is even a nice diagram, the cycle of adoption of any new technology when there is a peak of excitement first when we are trying to apply it everywhere. But then, there is this drop in excitement and disillusionment after which we finally get onto the plateau of enlightenment, finding the best application for this technology.

Slava: I remember the same in the area of design methodology when design sprint just appeared, people tried applying it everywhere, even in many places where it just didn’t fit or the problem was too large or the team culture wasn’t consistent with the trust and openness implied by such a methodology as a design sprint. But over time, it found its application and now, used not that often, but only by those people who need it.

Vitaly: Right. Talking actually about team culture, maybe just to switch the topic a little bit, maybe you could bring a few red flags that you always try to watch out for. Because of course, when you are working with a diverse team and you have people who have very different backgrounds and also have very different expectations and very different skill sets, inevitably, you will face situations where team culture clashes. So I’m wondering, what do you think would be the early warning signs that the manager needs to watch out for to prevent things from exploding down the line?

Slava: That’s a good question. I would turn it in a slightly different direction because I think outside of that kind of paradigm. I would try to prevent this from happening. The best way to deal with it is not to deal with it, to avoid dealing with it. So embracing the culture, understanding it and building it is important because then you won’t need to face the consequences. I wouldn’t say that there are real red flags because culture is like user experience, it’s like gravity, like any other physical force, it just exists. And whether you want it or not, whether it’s described in a fancy culture brand guideline or not, it exists anyway. The thing is to be sincere about culture, to embrace the existing culture and to broadcast it to the outside honestly.

Slava: The problem is when the communication about the culture is different from the actual culture. There are various cultures, there are even harsh cultures that someone would find extremely uncomfortable, but for example, for other people it can be a great environment for growth, for rapid growth. Maybe they will change their environment later, but during a certain period of life, it might be important.

Slava: I remember some of my previous companies with pretty harsh cultures, but they helped me to grow and to get where I am right now. Yeah, I wasn’t stressed, but I knew about it. I expected it to happen and I had my inner readiness to resist and to learn my lessons out of that. But the problem is when the company communicates its culture externally as the paradise of wellbeing and mindfulness, but in reality they have deadlines for tomorrow and never ending flow of tasks and crazy stakeholders who demand it from you immediately and give you contradicting requirements. So that’s the problem.

Slava: Of course, yeah, there are some extreme cases when the culture is really toxic, when these are insane, inhuman conditions, I don’t deny that. But in many cases, something that we simply perceive as uncomfortable for ourselves is not necessarily evil, sometimes it is, but not always. And my message is that cultures should be honest. And for that purpose, people should be honest with themselves.

Slava: Managers should look at their company and try to formulate in a simple way what type of a community this is. For example, in, again, one of my previous jobs, we realized that our team is like a university: people come to us and are hired because they want to grow rapidly, they want to grow faster than anywhere else, that’s why they join our company. They don’t get many perks and bonuses, the office is not very fancy and we are not those hipster designers who are always using trendy things. But at the same time, you get a lot of practice and you can earn the trust of a client, you can take the things you want to be responsible for yourself. You are not given tasks, but you can take the tasks you find important.

Slava: And when we realized that, we included it into our value proposition because as a company you’re not even interested in attracting people who will feel unsatisfied here. If you are working this way, but your external messaging is different and you attract those people who are searching for something different and then, when they come in they’re highly disappointed and you have to separate with them in a month or a year or they will bring the elements of this culture to your culture and there is a clash of cultures.

Slava: So the point here, I’m just trying to formulate the same idea but in different ways, it’s to be honest about the culture, it’s extremely important. But also, awareness about your culture. It’s not written, it exists. And sometimes, the company principles are quite misleading, they’re not often true because the real culture is seen at the office, it’s in the Slack chat, it’s in the way how people interact, what they discuss at the coffee machine.

Vitaly: Yeah. And there are, of course, also, I think I read this really nice article maybe a couple of years ago, about the idea of different subcultures and how they evolve over time and how they can actually mingle and even merge, as you might have very different teams working on different sides of the world, which then find each other and bring and merge cultures. So you kind of have these moving bits and moving parts.

Vitaly: Kind of on the way to one of the conferences, I went to Iceland. And there was a really nice friendly guy there who was guiding us through Iceland. And he was telling this whole story about how nothing ever stops, everything is moving, everything is changing, glaciers are changing, the earth is changing, everything is changing, everything is moving. And people are pretty much like that. People always find... I mean, maybe people don’t change that much, but they’re still finding ways of collaborating better and finding ways to create something that hopefully works better within the organization. How do you encourage that, though?

Vitaly: Very often I encounter situations where it feels like there are people just looking at the clock to finish on time and then, go home. And then, there are people who just want to do everything and they’re very vocal and they will have this incredible amount of enthusiasm everywhere and they will have all the GIFs in Slack and so on and so forth. But then, sometimes I feel like, again, talking about culture, their enthusiasm is clashed against this coldness that is coming from some people. And then, you have camps building. How do you deal with situations like that? You cannot just make people more similar, you just have to deal with very different people who just happen to have very different interests and priorities. How would you manage that?

Slava: That’s an amazing question, and you know why? Because there is no definite answer to it.

Vitaly: I like those kind of questions.

Slava: Yeah. It’s not easy and I struggled a lot with that. I know perfectly, based on my experience, what you’re asking about. One of the solutions might be to hire people who have a similar culture or at least one consistent with the existing culture. Because if your whole team, or the core team, the majority in the team who set this spirit and this atmosphere, are proactive, you shouldn’t hire people who are highly inconsistent with this kind of culture. Yeah, they might be more passive, more attentive to their schedule, but at least they should not resist it. They can support it maybe in a more calm way, but you don’t need someone critically opposing that state of things, and vice versa. Over time, I understood that.

Slava: Sometime ago, I thought that all designers should be proactive, rock stars, super skilled, taking responsibility about everything. But you know what? That’s quite one-sided point of view. Even if I belong to this kind of designers, it’s important to embrace other types of professionals because the downside of being such a designer is that you are driven forward by your passion, but only when you have this passion and motivation. But if it disappears, you can hardly make yourself do the simplest task. And that’s the problem because this fuel doesn’t feed you anymore.

Slava: On the other hand, those people who are more attentive to their balance between work and relaxation, people who are more attentive to their schedule and are less energetic at work and may be less passionate about what they do, they are more persistent and they can much easier survive such a situation when everything around is falling apart and many people lose motivation just because motivation is not such a strong driver for them. So over time, I understood that there are multiple types of designers and they’re all fine. The thing is to find your niche and to be in the place where you belong.

Vitaly: Right. Interesting. Because on top of that, I do have to ask a question. We could do this forever, we could keep this conversation going forever. I want to be respectful of your time as well. Just from your experience... There are so many people, the people who I’ve been speaking to over this last couple of years, but also here on the podcast, everybody has different opinions about how teams should be led and how the culture should be defined in terms of how people are working, specifically all-remote, a hundred percent remote or all on site, a hundred percent on site or hybrid with one day overlap, two days overlap, three days overlap, four days overlap.

Vitaly: What do you think works? I mean, of course, it’s a matter of the company where people allocated. And obviously, if everybody is from different parts of the world, being on site all the time, moving from, let’s say, fully remote to fully on site is just really difficult. So what would you say is really critical in any of those environments? Can hybrid work really well? Can remote work really well? Can onsite work really well? And there’s truly no best option, but I’m just wondering what should we keep in mind for each of those?

Slava: The culture. So look, culture is everything and it influences the way people work efficiently. If networking is really active in the team, if people communicate a lot apart from their work and tasks and everything, and if it’s normal for the team, if it’s part of the reasons why people are here in this company, then offline work is preferable. If people are more autonomous and they like it and everyone works like that in the company, then there is nothing bad in being hybrid or remote. So you see, it depends on the attitude to work and the general culture, the spirit, how people feel comfortable.

Vitaly: All right. But are you saying that if you have, let’s say, a mix of people who really prefer on site and then, really prefer remote, then you kind of get an issue because how do you merge both of those intentions?

Slava: But how do you get into that situation in the first place?

Vitaly: Well, good question.

Slava: Why have you attracted so different people to your company?

Vitaly: But for the rest — with HR?

Slava: Yes, I read processes.

Vitaly: But there might be different teams and then, eventually those teams get merged and then, eventually, some people come, some people leave and people are rotating from one team to another. And then, eventually, before you know it, you end up in a situation where you’re working on a new product with a new team and then, part are remote, part are on site and part don’t even want to be there.

Slava: That’s why large companies have processes. The thing that you are describing is quite typical for huge companies because you cannot keep a similar work culture forever. As you scale, it’s becoming more vague and hard to match all the time. There is an amazing diagram that I saw on LinkedIn; it was created by Julie Zhuo, who also wrote a great book on management. And this diagram shows how people are hiring, like this: A hires B, B hires C, C hires D, and there is a slight difference in their cultures. And if you imagine it as the line of overlapping circles, when A hires B, B hires C, C hires D and so on, then you notice how far A is from, let’s say, H or G; they’re very far away because this line of hiring brought a certain distortion, a certain mutation into the culture understanding with each step.

Slava: It’s like evolution is working. With every century or thousands of years, certain species changes one tiny trait, but in a million of years, you won’t even recognize that. The same with huge companies, you cannot control everything and micromanage it. So naturally, they’re extremely diverse. And many companies even are proud of being diverse and inclusive, which is another aspect, which is great, but in order to manage it all, they have to introduce processes and be more strictly regulated just to keep it working.

Vitaly: Right. Right. Well, I mean, we could speak about this for hours, I think. But maybe just two more questions before we wrap up. One thing that’s really important to me and really dear to me is that I know that you’ve been mentoring and you’ve been participating in kind of educating about design, also specifically for designers who are in Ukraine. And I mean, at this point, we probably have many more connections and many more insights about how design is actually working from Ukraine right now when the war is going on. I’m just wondering, do you see... Because we had a Smashing Meets event a couple of months ago now. And there was an incredible talk by one of the people from a startup team in Ukraine, in Kyiv, and they were speaking about just the incredible way they changed how the company works, how they adapted in every way to accommodate for everything. Like some people working from bomb shelters. This is just incredible.

Vitaly: Those kind of stories really make me cry. So this is just unbelievable. And I always have this very, I don’t even know how to describe it, like incredible sense of the strength that everybody who I’m interacting with who is coming through after all this time. It’s been now, what? It’s like one and a half years, right, well, much more than that, actually looking at 2014.

Vitaly: So the question, I guess, that I’m trying to ask here is that strength and that kind of obsession with quality, with good work, with learning, with educating, how did it come to be and how is it now? I don’t know if it makes sense the question, but just maybe your general feelings about what designers are feeling and how are they working at this point in May 2023?

Slava: That’s a good question. Unfortunately, I might not be the best person to answer because I’ve been living in Berlin for three years and, fortunately, I never experienced working from a bomb shelter, although many of my friends and acquaintances did. But what I know for sure is that the Ukrainian design community is quite peculiar, and it’s an inherent trait. It’s not something that we are taught, but something that is just our characteristic. I know that, unlike many other people from other countries, Ukrainian designers are really hungry for knowledge and new skills. And the level of self-organization is quite high because we are not used to getting it off the shelf, we are not used to receiving it, I don’t know, from educational institutions, from the government, from whoever else.

Slava: In Ukraine, or at least definitely my generation, millennials, we understand that if we don’t do anything, we will fail in life, that’s why we try to build our career early, we think about our future work during the last years of school and at the university, already planning where we going to work, how much we going to earn and how to find your niche, your place in life.

Slava: And the same in design, we are not waiting until our universities update their programs in order to teach us digital design; we are doing it ourselves, partnering with universities, participating in different courses, contributing to those programs. And I think that this feature, this trait of Ukrainian designers, is extremely helpful right now in crisis times. Maybe it didn’t catch us that much by surprise, but it was still unexpected. But Ukrainian designers, and other professionals in other professions, they just try to always have a plan B and a plan C and maybe even a plan D.

Vitaly: Yeah, that probably also explains... I mean, I have to ask this question, I really do. Why medieval themes in your UX memes? Oh, it even rhymes, it must be true.

Slava: First of all, it’s beautiful and funny. The first time I used medieval art-based memes was several years ago when I worked at EPAM Systems and prepared an internal presentation for one of our internal team meetups. And it was hilarious, everyone was laughing. And since then, I just started doing it all the time. It’s not like-

Vitaly: And you have like 50 of them now or even more?

Slava: More. Many more. It’s just something original. I haven’t seen many medieval memes, especially in the educational and other materials about design and UX. So it’s just, I like to bring positive emotions to my audience. So if it’s hilarious and makes them laugh and if it’s something new that others are not doing or at least that intensively, then why not? And I simply enjoy medieval art, including architecture, gothic style, Romanesque architecture, it’s something from fairy tales or legends, but then, you realize, it was real.

Vitaly: Yeah, so I guess, dear friends listening to this, if you ever want to find a nice gift for Slava, look out for medieval art and any books related to that; I think Slava will sincerely appreciate it. Now, as we’re wrapping up, and I think you’ve already touched on the future at this point, I’m curious, and this is a question I like asking at the end of every episode: Slava, do you have a dream project that you’d love to work on one day, a magical brand or a particularly interesting project in any industry, of any scope, of any size, with any team? Do you have something in mind that you would love to do one day? Maybe somebody from that team, from that project, from that company, from that brand is listening right now.

Slava: Great question, and maybe I don’t have an amazing answer to it, because it doesn’t matter. I’m dreaming about bringing value, creating something significant, but I’ve never limited myself to a particular area or a particular company or brand; it just doesn’t matter. If it’s valuable, then it’s a success.

Vitaly: All right, well, if you, dear listener, would like to hear more from Slava, you can find him on LinkedIn where he’s... Guess what? ... Slava Shestopalov, but also on Medium, where he writes a lot about UX, and of course, don’t forget the medieval-themed UX memes, and also his 5:00 AM travel blog. Slava will also be speaking in Freiburg at SmashingConf; I’m very much looking forward to seeing you there, and maybe even tomorrow, we’ll see about that. So please, dear friends, if you have the time, please drop in at SmashingConf Freiburg, September 2023. All right, well, thank you so much for joining us today, Slava. Do you have any parting words of wisdom that you would like to send out to the people who might be listening to this 20 years from now? Who knows?

Slava: Oh, wisdom, I’m not that wise yet, but something that I discovered recently is that we should care more about people. Technology is advancing so fast, so the thing which is left is the human factor. Maybe AI will take over part of our job, and that’s great because there are many routine tasks no one is fond of doing. But people, we are extremely complex, and understanding who we are and how we designers, as humans, can serve other humans is essential. So that’s where I’ve personally been putting my effort recently, and I think that’s a great direction of research for everyone working in design, UX, and related areas.

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[Gatsby Headaches And How To Cure Them: i18n (Part 1)]]> https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-1/ https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-1/ Mon, 12 Jun 2023 16:00:00 GMT Internationalization, or i18n, is the process of making your content understandable in other languages, regions, and cultures to reach a wider array of people. However, a more interesting question would be, “Why is i18n important?” The answer is that we live in an era where hundreds of cultures interact with each other every day, i.e., we live in a globalized world. However, our current internet doesn’t satisfy the needs of this globalized world.

Did you know that 60.4% of the internet is in English, but only 16.2% of the world speaks English?

Source: Visual Capitalist

Yes, it’s an enormous gap, and until perfect AI translators are created, the internet community must close it.

As developers, we must adapt our sites to support translations and formats for other countries, languages, and dialects, i.e., localize our pages. There are two main problems to solve when implementing i18n on our sites.

  1. Storing and retrieving content.
    We will need files to store all our translations while not bloating our page’s bundle size and a way to retrieve and display the correct translation on each page.
  2. Routing content.
    Users must be redirected to a localized route with their desired language, like my-site.com/es or en.my-site.com. How are we going to create pages for each locale?

Fortunately, in the case of Gatsby and other static site generators, translations don’t bloat up the page bundle size since they are delivered as part of the static page. The rest of the problems are widely known, and there are a lot of plugins and libraries available to address them, but it can be difficult to choose one if you don’t know their purpose, what they can do, and if they are compatible with your existing codebase. That’s why in the following hands-on guide, we will see how to use several i18n plugins for Gatsby and review some others.

The Starter

Before showing what each plugin can do and how to use it, we first have to start with a base example. (You can skip this and download the starter here.) For this tutorial, we will work with a site with multiple pages created from an array of data, like a blog or wiki. In my case, I chose a cooking blog that will initially support only English.

Start A New Project

To get started, let’s start a plain JavaScript Gatsby project without any plugins at first.

npm init gatsby
cd my-new-site

For this project, we will create pages dynamically from markdown files. To be able to read and parse them into Gatsby’s data layer, we will need the gatsby-source-filesystem and gatsby-transformer-remark plugins. Here you can see a more in-depth tutorial.

npm i gatsby-source-filesystem gatsby-transformer-remark

Inside our gatsby-config.js file, we will add and configure our plugins to read all the files in a specified directory.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-transformer-remark`,
  ],
};

Add Your Content

As you can see, we will use a new ./src/content/ directory where we will save our posts. We will create a couple of folders with each recipe’s content in markdown files, like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
│ ├── pages
│ ├── images

Each markdown file will have the following structure:

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "How to make mac and cheese"
cover_image:
    image: "./cover.jpg"
    alt: "Macaroni and cheese"
locale: "en"
---

Step 1
Lorem ipsum...

You can see that the first part of the markdown file has a distinct structure and is surrounded by --- on both ends. This is called the frontmatter and is used to save the file’s metadata. In this case, the post’s title, date, locale, etc.

As you can see, we will be using a cover.jpg file for each post, so to parse and use the images, we will need to install the gatsby-plugin-image, gatsby-plugin-sharp, and gatsby-transformer-sharp plugins (I know, there are a lot 😅).

npm i gatsby-plugin-image gatsby-plugin-sharp gatsby-transformer-sharp

We will also need to add them to the gatsby-config.js file.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
    `gatsby-transformer-remark`,
    `gatsby-plugin-image`,
  ],
};

Querying Your Content

We can finally start our development server:

npm run develop

And go to http://localhost:8000/___graphql, where we can make the following query:

query Query {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And get the following result:

{
  "data": {
    "allMarkdownRemark": {
      "nodes": [
        {
          "frontmatter": {
            "slug": "/mac-and-cheese",
            "title": "How to make mac and cheese",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/burritos",
            "title": "How to make burritos",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/pizza",
            "title": "How to make Pizza",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        }
      ]
    }
  }
}

Now the data is available in Gatsby’s data layer; to access it, we will need to run a query from the ./src/pages/index.js page.

Go ahead and delete all the boilerplate on the index page. Let’s add a short header for our blog and create the page query:

// src/pages/index.js

import * as React from "react";
import {graphql} from "gatsby";

const IndexPage = () => {
  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
    </main>
  );
};

export const indexQuery = graphql`
  query IndexQuery {
    allMarkdownRemark {
      nodes {
        frontmatter {
          slug
          title
          date
          cover_image {
            image {
              childImageSharp {
                gatsbyImageData
              }
            }
            alt
          }
        }
      }
    }
  }
`;

export default IndexPage;

Displaying Your Content

The result from the query is injected into the IndexPage component as a prop called data. From there, we can render all the recipes’ information.

// src/pages/index.js

// ...
import {RecipePreview} from "../components/RecipePreview";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

We will create the RecipePreview component in a new ./src/components/ directory:

// ./src/components/RecipePreview.js

import * as React from "react";
import {Link} from "gatsby";
import {GatsbyImage, getImage} from "gatsby-plugin-image";

export const RecipePreview = ({data}) => {
  const {cover_image, title, slug} = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};

Creating Pages From Your Content

If we go to http://localhost:8000/, we will see all our recipes listed, but now we have to create a custom page for each recipe. We can do it using Gatsby’s File System Route API. It works by writing a GraphQL query inside the page’s filename, generating a page for each query result. In this case, we will make a new directory ./src/pages/recipes/ and create a file called {markdownRemark.frontmatter__slug}.js. This filename translates to the following query:

query MyQuery {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
      }
    }
  }
}

And it will create a page for each recipe using its slug as the route.

Now we just have to create the post’s component to render all its data. First, we will use the following query:

query RecipeQuery {
  markdownRemark {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

This will query the first markdown file available in our data layer, but to specify the markdown file needed for each page, we will have to use variables in our query. The File System Route API injects the slug into the page’s context in a property called frontmatter__slug. When a property is in the page’s context, it can be used as a query variable by prefixing it with $, so the slug will be available as $frontmatter__slug.

query RecipeQuery($frontmatter__slug: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

The page’s component is pretty simple. We just get the query data from the component’s props. Displaying the title and date is straightforward, and the html can be injected into a p tag. For the image, we just have to use the GatsbyImage component exposed by gatsby-plugin-image.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

The last thing is to use the Gatsby Head API to change the page’s title to the recipe’s title. This can be easily done since the query’s data is also available in the Head component.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Summing it all up results in the following code:

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

import * as React from "react";
import {GatsbyImage, getImage} from "gatsby-plugin-image";
import {graphql} from "gatsby";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

export const recipeQuery = graphql`
  query RecipeQuery($frontmatter__slug: String) {
    markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
      html
    }
  }
`;

export default RecipePage;

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Creating Localized Content

With all this finished, we have a functioning recipe blog in English. Now we will use each plugin to add i18n features and localize the site (for this tutorial) for Spanish speakers. But first, we will make a Spanish version of each markdown file in ./src/content/, leaving a structure like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
│ ├── pages
│ ├── images

Inside our new Spanish markdown files, we will keep the same frontmatter structure, but with the values translated into the new language and the locale property changed to es. However, it’s important to note that the slug field must be the same in each locale.
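
For example, the frontmatter of ./src/content/mac-and-cheese/index.es.md could look like the following (a sketch; the Spanish title and alt text here are illustrative, only the unchanged slug and the es locale are required):

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "Cómo hacer macarrones con queso"
cover_image:
    image: "./cover.jpg"
    alt: "Macarrones con queso"
locale: "es"
---

Paso 1
Lorem ipsum...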

gatsby-plugin-i18n

This plugin is featured in Gatsby’s Internationalization Guide as its first option for implementing i18n routes. The purpose of this plugin is to create localized routes by adding a language code to each page filename, so, for example, a ./src/pages/index.en.js file would result in a my-site.com/en/ route.

I strongly recommend not using this plugin. It is outdated and hasn’t been updated since 2019, so it is kind of a disappointment to see it promoted as one of the main solutions for i18n in Gatsby’s official documentation. It also breaks the File System Route API, so you must use another method for creating pages, like the createPages function in the Gatsby Node API. Its only real use would be to create localized routes for certain pages, but considering that you must create a file for each page and each locale, it would be impossible to manage them on even medium-sized sites. A 20-page site with support for five languages would need 100 files!

gatsby-theme-i18n

Another plugin for implementing localized routes is gatsby-theme-i18n, which will be pretty easy to use in our prior example.

We will first need to install the gatsby-theme-i18n plugin and the gatsby-plugin-react-helmet and react-helmet plugins to help add useful language metadata in our <head> tag.

npm install gatsby-theme-i18n gatsby-plugin-react-helmet react-helmet

Next, we can add it to the gatsby-config.js:

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    //other plugins ...
    {
      resolve: `gatsby-theme-i18n`,
      options: {
        defaultLang: `en`,
        prefixDefault: true,
        configPath: require.resolve(`./i18n/config.json`),
      },
    },
  ],
};

As you can see, the plugin’s configPath option points to a JSON file. This file will have all the information necessary to add each locale. We will create it in a new ./i18n/ directory at the root of our project:

[
  {
    "code": "en",
    "hrefLang": "en-US",
    "name": "English",
    "localName": "English",
    "langDir": "ltr",
    "dateFormat": "MM/DD/YYYY"
  },

  {
    "code": "es",
    "hrefLang": "es-ES",
    "name": "Spanish",
    "localName": "Español",
    "langDir": "ltr",
    "dateFormat": "DD.MM.YYYY"
  }
]

Note: To see changes in the gatsby-config.js file, we will need to restart the development server.

And just like that, we have added i18n routes to all our pages. Let’s head to http://localhost:8000/es/ or http://localhost:8000/en/ to see the result.

Querying Localized Content

At first glance, you will see a big problem: the Spanish and English pages both display the posts from both locales because we aren’t filtering the recipes for a specific locale, so we get all the available recipes. We can solve this by once again adding variables to our GraphQL queries. The gatsby-theme-i18n plugin injects the current locale into the page’s context, making it available as a query variable under the $locale name.

index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

{markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Localizing Links

You will also notice that all Gatsby links are broken since they point to the non-localized routes instead of the new ones, so they will send users to a 404 page. To solve this, gatsby-theme-i18n exposes a LocalizedLink component that works exactly like Gatsby’s Link but points to the current locale. We just have to swap each Link component for a LocalizedLink.

// ./src/components/RecipePreview.js

+ import {LocalizedLink as Link} from "gatsby-theme-i18n";
- import {Link} from "gatsby";

//...

Changing Locales

Another vital feature to add is a component for switching from one locale to another. However, making a language selector isn’t completely straightforward. First, we need to know the current page’s path, like /en/recipes/pizza, to extract the recipes/pizza part and prepend the desired locale, getting /es/recipes/pizza.

To access the page’s location information (URL, HREF, path, and so on) in all our components, we will need to use the wrapPageElement function available in the gatsby-browser.js and gatsby-ssr.js files. In short, this function lets you access the props used on each page, including a location object. We can set up a context provider with the location information and pass it down to all components.

First, we will create the location context in a new directory: ./src/context/.

// ./src/context/LocationContext.js

import * as React from "react";
import {createContext} from "react";

export const LocationContext = createContext();

export const LocationProvider = ({location, children}) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

As you can imagine, we will pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;

  return <LocationProvider location={location}>{element}</LocationProvider>;
};

Note: Since we just created the gatsby-ssr.js and gatsby-browser.js files, we will need to restart the development server.

Now the page’s location is available in all components through context, and we can use it in our language selector. We also have to pass down the current locale to all components; gatsby-theme-i18n exposes a useful useLocalization hook that lets you access the current locale and the i18n config. However, a caveat is that it can’t get the current locale in Gatsby files like gatsby-browser.js and gatsby-ssr.js, only the i18n config.

Ideally, we would want to render our language selector in wrapPageElement so it is available on all pages, but we can’t use the useLocalization hook there. Fortunately, the wrapPageElement props argument also exposes the page’s context and, inside it, the current locale.

Let’s create another context to pass down the locale:

// ./src/context/LocaleContext.js

import * as React from "react";
import {createContext} from "react";

export const LocaleContext = createContext();

export const LocaleProvider = ({locale, children}) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

And use it in our wrapPageElement function:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>{element}</LocaleProvider>
    </LocationProvider>
  );
};

The last piece is removing the locale (es or en) from the path (/es/recipes/pizza). Using the following simple but ugly regex, we can remove the /en/ and /es/ at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
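
If you ever add more locales, one more flexible option is to build the pattern from the locale codes in your i18n config instead of hardcoding them. Here is a minimal sketch, assuming the same en and es codes as in our config.json (buildLocaleRegex is a hypothetical helper, not part of gatsby-theme-i18n):

// Build a locale-stripping regex from a list of locale codes.
// For ["en", "es"], this produces /^\/(en|es)(\/|$)/.
const buildLocaleRegex = (codes) =>
  new RegExp(`^\\/(${codes.join("|")})(\\/|$)`);

const removeLocalePath = buildLocaleRegex(["en", "es"]);

// "/es/recipes/pizza".replace(removeLocalePath, "") returns "recipes/pizza"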

Now we have to create our LanguageSelector.js:

// ./src/components/LanguageSelector

import * as React from "react";
import {useContext} from "react";
import {useLocalization} from "gatsby-theme-i18n";
import {Link} from "gatsby";
import {LocationContext} from "../context/LocationContext";
import {LocaleContext} from "../context/LocaleContext";

export const LanguageSelector = () => {
  const {config} = useLocalization();
  const locale = useContext(LocaleContext);
  const {pathname} = useContext(LocationContext);

  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");

  return (
    <div>
      {config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      })}
    </div>
  );
};

Let’s break down what is happening:

  1. Get our i18n config through the useLocalization hook.
  2. Get the current locale through context.
  3. Get the page’s current pathname through context, which is the part that comes after the domain (like /en/recipes/pizza).
  4. We remove the locale part of the pathname using a regex pattern (leaving just recipes/pizza).
  5. We want to render a link for each available locale except the current one, so we check that each locale differs from the current page’s locale before rendering a common Gatsby Link pointing to it.

Now inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {LanguageSelector} from "./src/components/LanguageSelector";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <LanguageSelector />
        {element}
      </LocaleProvider>
    </LocationProvider>
  );
};

Redirecting Users

The last detail to address is that the non-i18n routes like http://localhost:8000/ or http://localhost:8000/recipes/pizza are now empty. To solve this, we can redirect users to their desired locale using Gatsby’s createRedirect action in gatsby-node.js.

// ./gatsby-node.js

exports.createPages = async ({actions}) => {
  const {createRedirect} = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

Note: Redirects only work in production! Not in the local development server.

With this, each request that doesn’t start with the English or Spanish locale will be redirected to a localized route. The wildcard * at the end of each route means users are redirected to the same path under the locale, e.g., /recipes/mac-and-cheese/ redirects to /en/recipes/mac-and-cheese/. Also, it will check for the specified language in the request’s origin and redirect to that locale if available; otherwise, it will default to English.

react-intl

react-intl is an internationalization library for any React app that can be used with Gatsby without any extra configuration. It provides a component to handle translations and many more to format numbers, dates, and times, like the following:

  • FormattedNumber,
  • FormattedDate,
  • FormattedTime.

It works by adding a provider called IntlProvider to pass down the current locale to all the react-intl components. Among others, the provider takes three main attributes:

  • messages
    An object with all your translations.
  • locale
    The current page’s locale.
  • defaultLocale
    The default page’s locale.

So, for example:

<IntlProvider messages={{}} locale="es" defaultLocale="en">
  <FormattedNumber value={15000} />
  <br />
  <FormattedDate value={Date.now()} />
  <br />
  <FormattedTime value={Date.now()} />
</IntlProvider>

Will format the given values to Spanish and render:

15.000

23/1/2023

19:40

But if the locale attribute in IntlProvider was en, it would format the values to English and render:

15,000

1/23/2023

7:42 PM

Pretty cool and simple!

Using react-intl With Gatsby

To showcase how react-intl works with Gatsby, we will continue from our prior example using gatsby-theme-i18n.

First, we will need to install the react-intl package:

npm i react-intl

Secondly, we have to write our translations; in this case, we just have to translate the title and subtitle on the index.js page. To do so, we will create a file called messages.js in the ./i18n/ directory:

// ./i18n/messages.js

export const messages = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
  },
};

Next, we have to set up the IntlProvider in the gatsby-ssr.js and gatsby-browser.js files:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {IntlProvider} from "react-intl";
import {LanguageSelector} from "./src/components/LanguageSelector";
import {messages} from "./i18n/messages";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <IntlProvider messages={messages[locale]} locale={locale} defaultLocale="en">
          <LanguageSelector />
          {element}
        </IntlProvider>
      </LocaleProvider>
    </LocationProvider>
  );
};

And use the FormattedMessage component with an id attribute holding the desired translation key name.

// ./src/pages/index.js

// ...
import {FormattedMessage} from "react-intl";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

And just like that, our translations will be applied depending on the current user’s locale. However, i18n isn’t only about translating all the text into other languages but also about adapting to the way numbers, dates, currency, and so on are formatted in the user’s region. In our example, we can format the date on each recipe page according to the current locale using the FormattedDate component.

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedDate} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedDate value={date} year="numeric" month="long" day="2-digit" />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

As you can see, we feed the component the raw date and specify how we want to display it. Then the component will automatically format it to the correct locale. And with the year, month, and day attributes, we can further customize how to display our date. In our example, the date 19-01-2023 will be formatted the following way:

English: January 19, 2023

Spanish: 19 de enero de 2023

If we want to add a localized string around the date, we can use react-intl arguments. Arguments are a way to add dynamic data inside our react-intl messages. They work by adding curly braces {} inside a message.

The arguments follow this pattern { key, type, format }, in which

  • key is the data to be formatted;
  • type specifies if the key is going to be a number, date, time, and so on;
  • format further specifies the format, e.g., if a date is going to be written like 10/05/2023 or October 5, 2023.

In our case, we will name our key postedOn, and it will be a date type in a long format:

// ./i18n/messages.js

export const messages = {
  en: {
    // ...
    recipe_post_date: "Written on {postedOn, date, long}",
  },
  es: {
    // ...
    recipe_post_date: "Escrito el {postedOn, date, long}",
  },
};

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedMessage} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedMessage id="recipe_post_date" values={{postedOn: new Date(date)}} />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};
//...

Note: For the date to work, we will need to create a new Date object with our date as its only argument.

Localizing The Page’s Title

The last thing you may have noticed is that the index page’s title isn’t localized. In the recipe pages’ case, this isn’t a problem since they query the already localized title for each post, but the index page’s title doesn’t come from a query. Solving this can be tricky for two reasons:

  1. You can’t use the Gatsby Head API directly with react-intl since the IntlProvider doesn’t exist for components created inside the Head API.
  2. You can’t use the FormattedMessage component inside the title tag since it only allows a simple string value, not a component.

However, there is a workaround for both problems:

  1. We can use react-helmet (which we installed with gatsby-theme-i18n) inside the page component where the IntlProvider is available.
  2. We can use react-intl’s imperative API to get the messages as strings instead of using the FormattedMessage component. In this case, the imperative API exposes a useIntl hook that returns an intl object, and the intl.messages property holds all our messages too.

So the index component would end up like this:

// ./src/pages/index.js

// ...
import {FormattedMessage, useIntl} from "react-intl";
import {Helmet} from "react-helmet";

const IndexPage = ({data}) => {
  const intl = useIntl();

  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <Helmet>
        <title>{intl.messages.index_page_title}</title>
      </Helmet>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

react-i18next

react-i18next is a well-established library for adding i18n to our React sites, and it brings the same features, hooks, and utils as react-intl, and more. However, a crucial difference is that setting up react-i18next requires creating a wrapper plugin in gatsby-node.js, while react-intl can be used as soon as you install it, so I believe the latter is a better option for Gatsby. That said, there are already plugins that make setting up the react-i18next library faster, like gatsby-plugin-react-i18next and gatsby-theme-i18n-react-i18next.
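
For reference, a plugin-based setup could look like the following sketch in gatsby-config.js. The option names (languages, defaultLanguage, i18nextOptions) come from the gatsby-plugin-react-i18next documentation, so double-check them against the version you install:

// ./gatsby-config.js (sketch, assuming gatsby-plugin-react-i18next)

module.exports = {
  //...
  plugins: [
    //other plugins ...
    {
      resolve: `gatsby-plugin-react-i18next`,
      options: {
        languages: [`en`, `es`], // locales to generate localized routes for
        defaultLanguage: `en`, // fallback locale
        i18nextOptions: {
          interpolation: {
            escapeValue: false, // React already escapes rendered output
          },
        },
      },
    },
  ],
};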

Conclusion

The current state of Gatsby, and especially of its plugin ecosystem, is precarious, and its popularity declines each year, so it’s important to know how to handle it and which plugins to use if you want to work with Gatsby. Despite all that, I still believe Gatsby is a powerful tool and still worth starting a new project with npm init gatsby.

I hope you found this guide useful and leave with a better grasp of i18n in Gatsby and with less of a headache. In the next article, we will explore an in-depth solution to i18n by creating your own i18n plugin!

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Design Under Constraints: Challenges, Opportunities, And Practical Strategies]]> https://smashingmagazine.com/2023/06/design-constraints-challenges-opportunities-practical-strategies/ https://smashingmagazine.com/2023/06/design-constraints-challenges-opportunities-practical-strategies/ Fri, 09 Jun 2023 10:00:00 GMT “If you don’t want to work within constraints, become an artist.” That is what one of my design lecturers told me when I was at university back when the web wasn’t even a thing.

That has turned out to be one of the most useful pieces of advice I ever received in my career and has led me to embrace and even enjoy working within constraints, which probably explains why I tend to specialize in highly regulated sectors with enormous amounts of stakeholders and legacy.

So, if you find working within constraints challenging, this is the post for you. In it, I hope to change your attitude towards constraints and provide practical ways of dealing with even the most frustrating barriers.

But let’s begin by looking at the kind of constraints you could find yourself facing.

Constraints On Every Side

The constraints we face come in all shapes and sizes, from technical constraints due to legacy technology or backwards compatibility to legal constraints relating to compliance requirements or accessibility.

Then there can be inadequate availability of images, video, and text or simply a lack of access to stakeholders.

However, the biggest two, without a doubt, are a lack of time and a lack of resources (either money or people). In fact, it is rare to encounter a project where somebody is not in a hurry, and you have enough resources to do the job properly!

It is easy to let all of these obstacles demoralize you, but I would encourage you to embrace, rather than resist, their constraints.

Why You Should Embrace Your Constraints

Constraints are not a set of necessary evils we have to endure. Instead, they are the core of what shapes the work we do.

  • Constraints provide a clear set of guidelines and limitations, which can help focus the design process and prevent scope creep.
  • Constraints help to build trust with clients or stakeholders, as they can see that the designer is able to work within their limitations and still deliver a high-quality product.
  • But most importantly of all, constraints can lead to more creative and innovative solutions, as designers are forced to think creatively within the given limitations.

I have done some of my best work over the years precisely because of the constraints placed upon me, not despite them.

Also, some constraints are a good idea. Ensuring a site is accessible just makes sense, as does limiting the time and money an organization is willing to invest.

Not that you should blindly accept every constraint placed upon you.

Know When To Push Back Against Constraints

Unsurprisingly, I would encourage you to challenge constraints that are based on incorrect assumptions or outdated information. However, you won’t come across those that frequently.

More common are constraints that make sense from “a certain point of view.” However, these kinds of constraints are not always right within the context of the project and its long-term objectives.

For example, attempting to deliver a project within a strict budget and on an aggressive schedule may reduce the cost to the organization. But it will substantially increase the risk of the project failing, and so ultimately, the money and time that were spent will be wasted.

Another common example is compliance constraints. These constraints exist to protect the organization from possible risk, but many larger organizations become so risk-averse that they undermine their competitiveness in the market. They swap one type of risk for another.

The key in these situations is to demonstrate the cost of any constraint placed upon you.

Demonstrating The Cost Of An Unhealthy Constraint

Often, those who impose constraints upon you do not see the problems these constraints create. This is usually because they are only thinking in terms of their own area of responsibility. For example, a compliance officer is only going to be thinking about compliance and not the broader user experience. Equally, the IT department is going to be more focused on security and privacy than conversion or usability.

Ultimately the decision of whether to enforce a constraint or not comes down to balancing multiple factors. Therefore, what you need to do is

Demonstrate the cost associated with a constraint so that senior management (who take a more holistic view) has all of the facts to make a final decision.

You can demonstrate the cost in one of three ways. You can either focus on the damage that a constraint causes, the cost of not taking an action the constraint prevents, or the lost opportunities imposed by the constraint.

Let’s look at each to help you see more clearly how this can work.

Highlight The Hidden Damage Of A Constraint

I once worked for a consumer electronics company. One of their biggest sellers was a kettle that included a water filter to prevent limescale build-up. (I know, I work on the most exciting projects!)

The company insisted that when somebody added the kettle to their shopping cart, we should automatically add a set of water filters as well.

This is a well-known dark pattern that damages the user experience, but I also knew that it was increasing the average order value, a key metric the e-commerce team tracked.

To combat this constraint, I knew I had to demonstrate that it was causing damage that the e-commerce team and leadership were unaware of. So, I took the following steps:

  • I gathered evidence on social media of users complaining about this issue.
  • I contacted the customer support team to get some metrics about the number of complaints.
  • I contacted the returns team to find out how many people returned the filters.
  • I looked on review sites to see the number of negative reviews relating to filters.

Sure enough, I found substantial evidence that this was a major issue among consumers. But I didn’t stop there. I wanted to associate a financial cost with the decision, so I made some estimates:

  • I made my best guess at the cost of combating the negative reviews, referencing various sources I found online.
  • I researched the average cost of dealing with a complaint and combined it with the data from the customer services team to guess the overall cost of dealing with filter complaints.
  • I used a similar approach to work out an approximate cost of processing returned filters.

Now, let me be clear, these were nothing more than guesses on my part. My figures were not accurate, and people in the company were quick to challenge them. But associating a dollar value with the problem got their attention!

I agreed that my figures were probably wildly off and suggested we do some proper research to find out the real cost.

You don’t need hard data to demonstrate there is a problem. An educated guess is good enough to start a discussion.

Of course, not all constraints are actively causing damage. Some are merely preventing some better action from being taken. In these cases, you need a different approach.

Focus On The Cost Of Inaction

Over time, an organization establishes processes and procedures that have been proven to work for them. The bigger the organization, the more standard operating procedures they have, and the more constraints you encounter.

Well-established companies become so afraid of losing their position that they grow extremely risk-averse and place considerable constraints on any project.

People succeed in organizations like this by doing what has been done before. This can be problematic for those of us who work in digital because most of what we are trying to do is new.

To combat this bias towards the status quo, we need to demonstrate the cost of inaction. Put another way, we need to show management that if they do not do things differently, it will threaten the market position the organization has established.

In most cases, the best approach is to focus on the competition. Do a bit of research and show that the competition is less risk-averse and gaining market share as a result. Keep mentioning how they are doing things differently and how that threatens your organization’s market position.

Another tactic is to demonstrate how customer expectations have changed and that if the company does not act, they will begin to lose market share.

This is particularly easy to do because users’ expectations regarding digital have skyrocketed in recent years.

“The last best experience that anyone has anywhere becomes the minimum expectation for the experiences they want everywhere.”
— Bridget van Kralingen, Senior Vice President of IBM Global Markets

Put another way, users are comparing your organization’s subpar digital experience to the very best of what they are interacting with online, even when that comparison is not fair.

A bit of user research goes a long way in this regard. For example, consider running a system usability scale survey to compare your digital platforms to this industry benchmark. Alternatively, run a survey asking how important the digital experience is to customers.

While fear of losing market share is a big motivator to well-established businesses, younger, hungrier businesses tend to be more motivated by lost opportunities.

Demonstrate Lost Opportunities

Your management, stakeholders, and colleagues often do not realize what they are missing out on because of the constraints they place upon you. It, therefore, falls to you to demonstrate those opportunities.

Sometimes, you can make this case with analytics. For example, recently, I was working with a client who insisted on having a pricing page on their website, despite the fact the page showed no pricing! Instead, the page had a request pricing form.

They wanted to keep the page because they were afraid to lose the handful of leads that came via the page. However, I was able to convince them otherwise by pointing out that the page was actively alienating the majority of users who visited it, effectively losing them leads.

I did this by demonstrating the page had a higher bounce rate than any other page on the site, was the most common exit page, and had the lowest dwell time.

But analytics is not my favorite approach for demonstrating lost opportunities. Instead, I typically turn to prototyping.

Prototyping is a great way of demonstrating exactly what an organization will miss out on if they insist on unreasonable constraints, presuming, that is, that you create a prototype that is free from those constraints.

I use this approach all the time. Imagine, for example, that you have been told that a particular technology stack imposes a set of restrictive constraints on how an interface is designed. By prototyping what the interface could be if you were free from those constraints, you can make a powerful case for changing the technology stack.

Having a prototype gives you something to test against. You can use usability testing to provide hard evidence of how much it improves the user experience, findability, and even conversion.

Even more significantly, a prototype will excite internal stakeholders. If your prototype is compelling enough, they will want that solution, and that changes the conversation.

Instead of you having to justify why the IT stack needs to be changed, now the IT team has to justify why their IT stack cannot accommodate your solution. Stakeholders and management will want to know why they cannot have what they have fallen in love with.

Of course, people will not always fall in love with your prototype, and ultimately, many of your attempts to overcome constraints will fail despite your best efforts, and you need to accept that.

Conceding Defeat With Grace

Let’s be clear. It is your job to demonstrate to management or clients that a constraint placed upon you is unhealthy. They cannot be expected to know instinctively. They do not have your perspective on the project and so cannot see what you see.

This means that if they fail to remove the constraint you consider unhealthy, it is your failing to demonstrate the problem, not their fault.

Sure, you might consider them shortsighted or naive. But ultimately, you failed to make your case.

Also, it is important to note that you don’t always have the whole picture. A decision may be bad from a user experience perspective, for example, but it may be the right thing for the business. There will always be other factors at play that you are unaware of.

So when you fail to make your case, accept that with grace and do your best to work within the constraints given to you.

Ultimately, your working relationship with management, colleagues, and clients is more important than your professional pride and getting your way.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Testing Sites And Apps With Blind Users: A Cheat Sheet]]> https://smashingmagazine.com/2023/06/testing-sites-apps-blind-users-cheat-sheet/ https://smashingmagazine.com/2023/06/testing-sites-apps-blind-users-cheat-sheet/ Wed, 07 Jun 2023 10:00:00 GMT This article focuses on users of screen readers — special software that converts the source code of a site or app into speech. Usually, these are people with blindness or low vision, but not exclusively. They’ll help you discover most accessibility issues. Of course, the topic is too vast for a single article, but this might help you get started.

Part 1. What Is Accessibility Testing?

1.1. Testing vs. Audit

There are many ways of evaluating the accessibility of a digital product, but let’s start with distinguishing two major approaches.

Auditing is an element-by-element comparison of a site or app against a list of accessibility requirements, be it a universal standard (WCAG) or a country-specific law (like ADA in the U.S. or AODA in Ontario, Canada). There are two ways to do an audit:

  1. Automated audit
    Checking accessibility by means of web apps, plugins for design and coding software, or browser extensions (for example, axe DevTools, ARC Toolkit, WAVE, Stark, and others). These tools generate a report with issues and recommendations.
  2. Expert audit
    Evaluation of web accessibility by a professional who knows the requirements. This person may use assistive technology and have a disability, but they are still an expert with advanced knowledge, not a “common user.” As a result, you get a report too, but it’s more contextual and sensible.

Testing, unlike auditing, cannot be done by one person. It involves users of assistive technologies and comprises a set of one-on-one sessions facilitated by a designer, UX researcher, or another professional.

Today we’ll focus on testing as an undervalued yet powerful method.

1.2. Usability vs. Accessibility Testing

You might have already heard about usability testing or even tried it. No wonder it’s the top research method among designers. So how is it different from its accessibility counterpart?

Common features:

  • Script
    In both cases, a facilitator prepares a full written script with an introduction, questions, and tasks based on a realistic scenario (for example, buying a ticket or ordering a taxi). By the way, here are handy testing script templates.
  • Insights gathering
    Despite accessibility testing’s main focus, it also reveals lots of usability issues: simply put, whether a site or app is easy to use. In both cases, a facilitator should ask follow-up questions to get insight into people’s way of thinking, pain points, and needs.
  • Format
    Both testing types can be organized online or offline. Usually, one session takes from 30 minutes to 1 hour.

Key differences:

  • Participants selection
    People for usability testing are recruited mainly by demographic characteristics: job title, gender, country, professional experience, etc. When you test accessibility, you take into account the senses and assistive technologies involved in using a product.
  • What you can test
    In usability testing, you can test a live product, an interactive prototype (made in Figma, Protopie, Framer, etc.), or even a static mockup. Accessibility testing, in most cases, requires a live product; prototyping tools cannot deliver source code compatible with assistive technology. Figma attempted to make prototypes accessible, but it’s still far from perfect.
  • Giving hints
    When participants get stuck in the flow, you help them find the way out. But when you involve people with disabilities, you have to understand how their assistive gear works. Just to give you an example, a phrase like “Click on the red cross icon in the corner” will sound silly to a blind user.

1.3. Why Opt For Testing?

Now that you know the difference between an audit and testing and the distinction between usability and accessibility testing, let’s clarify why testing is so powerful. There are two reasons:

  1. Get valuable insights.
    The idea of testing is to learn how you can improve the product. While you won’t check all interface elements and edge cases, such sessions show if the whole flow works and if people can reach the goal. Unlike even the most comprehensive audits, testing is much closer to reality and based on the usage of real assistive technology by a person with a disability.
  2. Build empathy through storytelling.
    A good story is more compelling than bare numbers. Besides, it can serve as a helpful addition to such popular pro-accessibility arguments as legal risks, winning new customers, or brand impact. Even 1–2 thorough sessions can give you enough material for a vivid story to excite the team about accessibility. An audit report alone may not be as thrilling to read.

Testing gives you more realistic insights into common scenarios. Laws and standards aren’t perfect, and formal compliance might not cover all the user challenges. Sometimes people don’t take the “designed” path to the goal but the one that seems safer or more intuitive, and testing reveals this.

Of course, auditing is still a powerful method; however, its combination with testing will show much more accurate results. Now, let’s talk about accessibility testing in detail.

Part 2. Recruiting Users

There are many types of disabilities and, consequently, various assistive technologies that help people browse the web. Without a deep dive into theory, let’s just recap the variety of disabilities:

  • Depending on the senses involved or the affected area of life: visual (blindness, color deficiency, low vision), physical (cerebral palsy, amputation, arthritis), cognitive (dyslexia, Down syndrome, autism), auditory (deafness, hearing loss), and so on.
  • By severity: permanent (for example, an amputated leg or some innate condition), temporary (a broken arm or, let’s say, blurred vision right after using eye drops), and situational (for instance, a noisy room or carrying a child).

Note: You can find more information on various types of disabilities on the Microsoft Inclusive Design hub.

For the sake of simplicity, we’ll focus on the case applicable to most digital products: when a site or app mostly relies on vision. In this case, visual assistive technologies offer users an alternative way to work with content online. The most common technologies are:

  • Screen readers: software that converts text into speech and has numerous handy shortcuts to navigate efficiently. (We’ll talk about it in detail in the next chapters.)
  • Refreshable Braille displays: devices able to show a line of tactile Braille text. Round-tipped pins are raised through holes in a surface and refresh as the user moves their cursor on the screen. Such displays are vital for deafblind people.
  • Virtual assistants (Amazon Alexa, Apple Siri, Google Assistant, and others): an excellent example of universal design that serves the needs of both people with disabilities and non-disabled people. Assistants interpret human speech and respond via synthesized voices.
  • High-contrast displays or special modes: for people with low vision. Some users combine a high-contrast mode with a screen reader.

2.1. Who To Involve

Debates around an optimal number of testing participants are never-ending. But we are talking here about a particular case — organizing accessibility testing for the first time, hence the recommendation is the following:

  • Invite 3–6 users with blindness or low vision who either browse the web by means of screen readers or use a special mode (for example, extra zoom or increased contrast).
  • If your product has rich data visualization (charts, graphs, dashboards, or maps), involve several people with color blindness.

In any case, it’s better to conduct even one or two high-quality sessions than a dozen poorly prepared ones.

2.2. Where To Find People

It is not as hard to find people for testing as it seems at first glance. If you are working on a mass product for thousands of users, participants won’t need any special knowledge apart from proficiency with their assistive technology. Here are three sources we recommend checking:

  • Specialized platforms for recruiting users according to your parameters (for example, Access Works or UserTesting). This method is the fastest but not the cheapest one because platforms take their commission on top of user compensation.
  • Social media communities of people with disabilities. Try searching with keywords like “people with disabilities,” “PWD,” “support group,” “visually impaired,” “partially sighted,” or “blind people.” Ask the admins for permission before posting your research announcement so that it doesn’t get rejected.
  • Social enterprises and non-profits that work in the area of inclusion, employment, and support for people with disabilities (for example, Inclusive IT in Ukraine or The Federation of the Blind and Partially Sighted in Germany). Drop them an email with your request.

The last two options might sound like a way to get participants for free, but keep in mind that not everyone has the opportunity to volunteer.

When we organized accessibility testing sessions last year, three people agreed to take part pro bono because it was a university course, and we made no profit from it. Otherwise, be ready to compensate participants for their time (in my experience, around €15–30). It can be an Amazon gift card or a coupon for something useful in a particular country (just make sure it’s accessible).

Digital product companies that test accessibility regularly hire people with disabilities so that they have access to in-progress software and can check it iteratively before the official launch.

Part 3. Preparing For The Session

Now that you’ve recruited participants, it’s time to discuss things to prepare before the sessions. And the first question is:

3.1. Online Or Offline?

There are basically two ways to conduct testing sessions: remotely or face-to-face. While we usually prefer the first one, both techniques have pros and cons, so let’s talk about them.

Benefits of online:

  • Native environment.
    Participants can use familiar home equipment, like a desktop computer or laptop, with nicely tuned assistive technology (plugins, modes, settings).
  • Cost and time efficiency.
    No need to reimburse expenses for traveling to your office. It might be quite costly if a participant arrives with an accompanying person or needs special accessible transport.
  • Easier recruitment.
    It’s more likely you’ll find a participant that meets your criteria around the world instead of searching in your city (and again, zero travel expenses).

Benefits of offline:

  • Testing products in development.
    If you have a product that isn’t public yet, participants won’t be able to easily install it or open it in a browser. So, you’ll have to invite participants to your office, and they should probably bring a portable version of their assistive technology (for example, on a USB drive).
  • Testing mobile apps.
    If a person brings a personal phone, you’ll see not only the interaction with your product but also how the device is set up and what gestures and shortcuts a person uses.
  • Helping inexperienced users.
    Using assistive technology is a skill, and you may involve someone who is not yet proficient with it. So, the offline setting is more convenient when participants get stuck and you help them find the way out.

As you can see, online testing has more universal advantages, whereas the offline format rather suits niche cases.

3.2. Communication Tools

Once you decide to test online, a logical question is what tool to choose for the session. Basically, there are two options:

Specialized testing tools (for instance, UserTesting, Lookback, UserZoom, Hotjar, Useberry):

  • Apart from basic conferencing functionality, they support advanced note-taking, automatic transcription, click heatmaps, dashboards with testing results, and other features.
  • They are quite costly. Besides, trial versions may be too limited for even a single real session.
  • Participants may get stuck with an unfamiliar tool that they’ve never used before.

Popular video conferencing tools (for example, Google Meet, Zoom, Microsoft Teams, Skype, Webex):

  • Support all the minimally required functionality, such as video calls, screen-sharing, and call recording.
  • They are usually free.
  • There is a high chance that participants know how to use them. (Note: even in this case, people may still experience trouble launching screen-sharing).

Since we are talking about your first accessibility testing, it’s much safer and easier to use a good old video conferencing tool, namely the one your participants have experience with. For example, when we organized educational testing sessions for the Ukrainian Catholic University, we used Skype, and at the HTW University in Berlin, we chose Zoom.

Regardless of the tool choice, learn in advance how screen-sharing works in it. You’ll likely need to explain it to some of the participants using suitable (non-visual) language. As a result, the intro to accessibility testing sessions may take longer compared to usability testing.

3.3. Tasks

As we figured out before, accessibility testing requires a working piece of software (let’s say, an alpha or beta version); it’s harder to build, but it opens vast research opportunities. Instead of asking a participant to imagine something, you can actually observe them ordering a pizza, booking a ticket, or filling in a web form.

Recommendations for accessibility testing tasks aren’t much different from those in usability testing. Tasks should be realistic and formulated in the way people naturally think. Instead of referring to the interface (which button a person is supposed to click), describe a situation that could happen in reality.

Start a session with a mini-interview to learn about participants’ relevant experiences. For example, if you are going to test an air travel service, ask people if they travel frequently and what their desired destinations are. Based on these details, customize the tasks — booking a ticket to the place of the participant’s choice, not a generic location suggested by you.

Examples of realistic, broad tasks:

  • Testing a consumer product: bicycle online store.
    You want to buy a gift card for your colleague George who enjoys bikepacking. Choose the card value, customize other preferences, and select how George will receive the gift. (This task implies that you learned about a real George who likes cycling during a mini-interview.)
  • Testing a professional product: customer support tool.
    Your manager asked you to take a look at several critical issues that haven’t been answered for a week. Locate those tickets and figure out how to respond to them. (This task implies that you invited a participant who has worked as a customer support agent or in a similar role.)

Examples of leading UI-based tasks:

  • Consumer product
    “Open the main menu and find the ‘Other’ category. Choose a €50 gift card. In the ‘For whom’ input field, enter ‘John Doe’… Select ‘Visa/Mastercard’ as the payment method…”
  • Professional product
    “Navigate to the dashboard. Choose the ‘Last week’ option in the ‘Status’ filter and look at the list of tickets. Apply the filter ‘Sort by date’ and tell me what the top-most item is…”

A testing session is 50% preparation and 50% human conversation. It’s not enough to give even a well-formulated task and silently wait.

The initial task reveals which of the possible ways to accomplish it a participant finds most intuitive. When a person gets stuck, you can give hints, but they shouldn’t sound like “click the XYZ button”; instead, let them explore further. Something like the following:

— No worries. So, the search doesn’t give the expected result. What else can you do here?
— Hmm, I don’t know. Maybe filtering it somehow…
— OK, please try that.

3.4. Wording

Your communication style impacts participants’ way of thinking and the level of bias. Even a huge article won’t cover all the nitty-gritty, but here are several frequent mistakes.

Beware of the following:

  • Leading tasks: “Go to the ‘Dashboard’ section and find the frequency chart” or “Scroll to the bottom to see advanced options.”
    Such hints totally ruin the session, and you will never know how a person would act in reality.
  • Selling language: “Check our purchase in one click” or “Try the ‘Smart filtering’ feature.”
    It makes people feel as if they have to praise your product, not share what they really think.
  • Humorous tasks: “Create a profile for Johnny Cash” or, for example, “Request Christmas tree delivery to Lapland.”
    Jokes distract participants and decrease session realism.
  • IT terminology: “On the dashboard, find the toggle switch” or “Go to the block with dropdowns and radio buttons.”
    It’s bad for two reasons: you may confuse people with words they don’t understand, and it can be a sign that you’re giving leading tasks and excessive UI hints.

Here is recommended further reading by Nielsen Norman Group:

Part 4. Session Facilitation

As agreed before, your first accessibility testing session will probably involve a blind person or a person with low vision who uses a screen reader to browse the web. So, let’s cover the two main aspects you have to know before starting a session.

4.1. Screen Readers

A screen reader is an assistive software that transforms visual information (text and images) into speech. When a visually impaired person navigates through a site or app using a keyboard or touchscreen, the software “reads” the text and other elements out loud.

Screen readers rely on the source code but interpret it in a special way. They skip the code responsible for visual effects (like colors or fonts) and pick up the meaningful parts, such as heading tags, text descriptions for pictures, and labels of interactive elements (whether it’s a button, input field, or checkbox). The better the code is written, the easier it is for users to comprehend the content.
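To make this concrete, here is a minimal JSX sketch (our own illustration, not markup from any real product) of the semantic details a screen reader picks up: a real heading, a text alternative for an image, a label tied to its input, and a native button.

import React from "react";

// A minimal sketch of screen-reader-friendly markup; all names and
// strings here are hypothetical.
function OrderForm() {
    return (
        <section>
            {/* Announced as "Delivery details, heading level 2" */}
            <h2>Delivery details</h2>

            {/* The alt text is read in place of the image */}
            <img src="/map.png" alt="Map of the delivery area" />

            {/* htmlFor/id ties the label to the field, so focusing
                the input announces "Street address, edit text" */}
            <label htmlFor="address">Street address</label>
            <input id="address" type="text" />

            {/* A native button is announced as "Submit order, button" */}
            <button type="submit">Submit order</button>
        </section>
    );
}

export default OrderForm;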

Now that you know how screen readers function, it’s time to experience them firsthand. Depending on the operating system, you’ll have a standard embedded screen reader already available on your device:

  • VoiceOver: Mac and iOS;
  • Narrator: Windows;
  • TalkBack: Android.

During one of our training courses, we learned from blind users that the screen reader on the iPhone is more comfortable and flexible than the Android one. Interestingly, people don’t like the standard desktop screen readers on either Mac or Windows and usually install one of the advanced third-party readers, for instance:

  • JAWS (Job Access With Speech): Windows, paid, the most popular screen reader worldwide;
  • NVDA (Non-Visual Desktop Access): Windows, free of charge.

4.2. Navigation

Visually impaired people usually navigate apps and sites using a keyboard or touchscreen. And while sighted people scan a page and jump from one part to another, screen reader users can keep only one element in focus at a time, be it a paragraph of text or, let’s say, an input field.

Participants of your accessibility testing will likely run into an unpassable obstacle at some point in the session, and you’ll give them hints on how to find the way out and proceed with the next task. In this case, you’ll need a special non-visual language that makes sense.

Not helpful hints:

  • “Click the cross icon in the upper right corner.”
  • “Scroll to the bottom of the modal window and find the button there.”
  • “Look at the table in the center of the page.”

Helpful hints:

  • “Please, navigate to the next/previous item.”
  • “Go to the second element in the list.”
  • “Select the last heading/link/button.”

Note: UI hints above are suggested for cases when a user is completely stuck in the flow and cannot proceed, for example, when an element is not navigable via a keyboard or, let’s say, an interactive element doesn’t have a proper label or name.

Summary

Once all the testing sessions have been completed, you can analyze the collected feedback, determine priorities, and develop an action plan. This process could be the subject of a separate guideline, but let’s cover the three key principles right away:

  • Capturing information
    Testing produces tons of data, so you should be prepared to capture it; otherwise, it will be lost or distorted by your imperfect human memory. Don’t rely on a recording alone. Make notes in the process or ask an assistant to do that. Notes are easier to analyze and make it easier to spot repeating observations across sessions. Besides, they ensure you’ll have data if the recording fails.
  • Raw data ≠ insights
    Not everything you observe in testing sessions should be perceived as a call to action. Raw data shows what happened, while insights explain reasons, motivations, and ways of thinking. For example, you see that people use search instead of filters, but the insight may be that typing a search request needs less effort than going through the filter menu.
  • Criticality and impact
    Not all observations are significant. If five users struggle to proceed because the shopping cart isn’t keyboard-navigable, it’s a major barrier both for them and the business. But if one out of five participants didn’t like the button name, it isn’t critical. Take into account the following:
    • How many participants encountered a problem;
    • How much a problem impacts reaching the goal: booking a ticket, ordering pizza, or sending a document.

Once the information has been collected and processed, it is essential to share it with the team: designers, engineers, product managers, quality assurance folks, and so on. The more interactive this is, the better. Let people participate in the discussion, ask questions, and see what the findings mean for their area of responsibility.

As you gain more experience in conducting testing sessions, invite team members to watch the live stream (for instance, via Google Meet) or broadcast the session to a meeting room with observers, but make sure they stay silent and don’t intrude.

Further Reading

]]>
hello@smashingmagazine.com (Slava Shestopalov & Eugene Shykiriavyi)
<![CDATA[Exploring Universal And Cognitive-Friendly UX Design Through Pivot Tables And Grids]]> https://smashingmagazine.com/2023/06/universal-cognitive-friendly-ux-design-tables-grids/ https://smashingmagazine.com/2023/06/universal-cognitive-friendly-ux-design-tables-grids/ Tue, 06 Jun 2023 16:30:00 GMT Tables are one of the most popular ways to visualize data. Presenting data in tables is so ubiquitous — and core to the web itself — that I doubt many of you reading this have any trouble with the basics of the <table> element in HTML. But building a good complex table isn’t an easy task.

In fact, I’d even go so far as to say that tables are an integral part of our daily life.

That’s why we need to start thinking about making tables more inclusive. The web is supposed to be designed for everybody. That includes those with impairments that may prevent access to the information in the tables we make and rely on assistive technology to “read” them.

For the last several months, I’ve been working on this scientific project around inclusive design for people with cognitive disorders for my university degree. I’ve mostly focused on developing guidelines to help educational platforms adapt to such users.

I also work for a company that has developed a JavaScript library for creating pivot tables used for business analysis and data visualizations. At one point in my research, I found that tables are a type of popular data representation that can simultaneously be a lifesaver and a troublemaker, yes, for people with learning and cognitive problems, but for everyone else as well. Remember, we are all temporarily “abled” and prone to lose abilities like eyesight and hearing over time.

Plus, a well-executed inclusive table design is a pathway to improving everyone’s productivity and overall experience, regardless of impairment.

What We Mean By Cognitive Disorders

Cognitive disorders are defined as any kind of disorder that significantly impairs an individual’s conscious intellectual activity, such as thinking, reasoning, or remembering.

ADHD is one example that prevents a person from remaining focused or paying attention. There’s also dyslexia, which makes it tough to recognize and comprehend written words. Dyscalculia is specific to working with numbers and arithmetic.

For those without these conditions, it is difficult to understand what exactly can go wrong with the perception of written information. But based on descriptions from people with these conditions, simulators have been created that imitate what people with dyslexia see.

Currently, you can even install a special browser extension to estimate how difficult your site will be to perceive for people with this condition. The condition of people with ADHD is much harder to grasp, but videos with ADHD simulations do exist, which can also help you evaluate how difficult any given information is for such people to perceive.

These are all things that can make it difficult for people to use tables on the web. Tables are capable of containing lots of information that requires a high level of cognitive work.

  • The first stage toward helping users with such conditions is to understand their condition and experience its details firsthand — in other words, to practice empathy.
  • The second stage is to systematize the details and identify specific usability problems to solve.

Please indulge me as we dive a bit into some psychological theory that is important to understand when designing web pages.

Cognitive Load

Cognitive load relates to the amount of information that working memory can hold at one time. Our memory has a limited capacity, and instructional methods should avoid overloading it with unnecessary activities and information that competes with what the individual needs to complete their task.

UX professionals often say complex tasks that require the use of external resources may result in an increased cognitive load. But the amount of the load can be affected by any additional information, unusual design, or wrong type of data visualization. When a person is accustomed to a particular representation of certain types of data — like preferred date format or where form input labels are positioned — even a seemingly minor change increases the processing time of our brain.

Here’s an example: If a particular student is from a region where content is presented in a right-to-left direction and the software they are provided by their university only supports a left-to-right direction, the amount of mental work it takes to comprehend the information will be greater compared to other students.

If you still want another example, Anne Gibson explains this exceptionally well in a blog post that uses ducks to illustrate the idea.

Cognitive Biases

I also want to call special attention to cognitive biases, which are systematic errors in thinking that become patterns of deviation from rationality in judgment. When people are processing and interpreting information around them, it often can influence the decisions a person makes without even noticing.

For example, the peak-end rule says that people judge an experience by its “peak” and last interactions. It’s easy to prove. Try to reflect on a game you used to play as a kid, whether it’s from an arcade, a computer console, or something you played online. What do you remember about it? Probably the level that was hardest for you and the ending. That’s the “peak” of your experience and the last, most “fresh” one, and they create your overall opinion of the game. For more examples, there is a fantastic resource that outlines 106 different types of cognitive biases and how they affect UX.

Signal-to-noise Ratio

Last but not least, I’d like to touch on the concept of a signal-to-noise ratio briefly. It is similar to the engineering term but relates to the concept that most of the information we encounter is noise that has nothing to do with a user’s task.

  • Relevant and necessary information is a signal.
  • The ratio is the proportion of relevant information to irrelevant information.

A designer’s goal is to achieve a high signal-to-noise ratio because it increases the efficiency of how information is transmitted. The information applied to this ratio can be anything: text, illustrations, cards, tables, and more.

The main idea about cognitive disabilities I want you to take away is that they make individuals very sensitive to the way information is presented. A font that’s too small or a color that’s too bright can make content impossible to perceive. Adding gratuitous sound or animation may result in awful distractions (or worse) instead of nice enhancements.

I’ll repeat it:

A good user experience will prevent cognitive overload for everyone. It’s just that we have to remember that many out there are more sensitive to such noises and loads.

Focusing on individuals with specific considerations only gives you a more detailed view of what you need to solve for everybody to live a simpler life.

Considering Cognitive Disorders In UX Design

Now that we have defined the main problems that can arise in a design, I can sum up our goals for effective UX:

  • Reduce the cognitive load.
  • Maximize the signal-to-noise ratio.
  • Use correct cognitive biases to boost the user experience.

“Design” is a loaded term meaning lots of different things, from colors and fonts to animations and sounds and everything in between. All of that impacts the way an individual understands the information that is presented to them. This does not mean all design elements should be excluded when designing table elements. A good table design is invisible. The design should serve content, not the other way around.

With the help of lots of academic, professional, and personal research, I’ve developed a set of recommendations that I believe will result in cognitive-friendly and easy-to-perceive table designs.

Color Palettes And Usage

We should start by talking about the color because if the colors used in a table are improperly implemented, subsequent decisions do not really matter.

Many people consider colors to have meanings, which differ from culture to culture. That’s certainly true in a sociological sense, but as far as UX is concerned, the outcome is the same — colors carry information and emotion, and they don’t necessarily have to mean something in a design.

Rule 1: Aim For A Minimalist Color Palette

When you see a generous use of color in a table, it usually isn’t there to make the table more functional but to make the design stand out. I won’t say that using fewer colors guarantees a more functional table, but more color tends to pull attention away from the right things.

Accordingly, bright colors and accents should highlight information that has established meaning. This isn’t to say that interesting color schemes and advanced color palettes are off-limits. This means using colors wisely. They are a means to an end rather than a splash of paint for attention.

Adam Wathan & Steve Schoger offer a perfect example of color usage in a design study of customized Slack themes. Consider the two following interfaces. It may not seem like it at first, but the second UI actually has a more extensive color palette than the first.

The difference is that the second interface applies shades of the core color defined in the palette and that brighter and more vibrant shades are only used to highlight the important stuff.

You can explore this phenomenon yourself and test your perception of the colors in a design by changing the look of your messenger. For example, Telegram has some interface customization options, and while playing with them, I noticed I prefer reading and navigating between my chats in the “Night Accent” mode rather than the plain “System” mode.

Of course, both themes were designed for people with different preferences and characteristics, but this personal experiment led me to the following thought: even though the second option uses fewer colors, the uniformity of information is a bit confusing. From this, I concluded that too few colors and too minimal a design is also a bad choice. It is necessary to find a balance between the color palette and its usage.

The best option is to pick from one to three primary colors and then play with their shades, tints, and tones. To combine the colors wisely, you can use complementary, split complementary, or analogous approaches.

That said, I suggest using a “shading” monochromatic approach for tables. It means defining a base color in a palette, then expanding it with different shades in dark and light directions. In other words:

  1. Choose a primary color.
  2. Define an evenly darker and lighter shade of that primary color.

This produces two more colors to which you can apply the same technique, creating shades that are a perfect compromise between the colors on either side. Repeat this process until you reach the number of colors you need (generally, 7–9 shades will do).
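Here is a rough JavaScript sketch of that process. It assumes simple linear RGB mixing and a 7-step scale; real palettes are often tuned by eye or in a perceptual color space, so treat it as a starting point, not a prescribed algorithm.

function hexToRgb(hex) {
    const value = parseInt(hex.slice(1), 16);
    return [(value >> 16) & 255, (value >> 8) & 255, value & 255];
}

function rgbToHex([r, g, b]) {
    return "#" + [r, g, b]
        .map((channel) => channel.toString(16).padStart(2, "0"))
        .join("");
}

// Linearly mix two colors; weight 0 returns `from`, weight 1 returns `to`.
function mix(from, to, weight) {
    return from.map((channel, i) =>
        Math.round(channel + (to[i] - channel) * weight)
    );
}

// Build `steps` shades around the base color, darkest to lightest.
function shades(baseHex, steps = 7) {
    const base = hexToRgb(baseHex);
    const half = Math.floor(steps / 2);
    const result = [];

    for (let i = half; i > 0; i--) {
        result.push(rgbToHex(mix(base, [0, 0, 0], i / (half + 1))));
    }
    result.push(baseHex);
    for (let i = 1; i <= half; i++) {
        result.push(rgbToHex(mix(base, [255, 255, 255], i / (half + 1))));
    }

    return result;
}

console.log(shades("#3366cc"));
// → ["#0d1a33", ..., "#3366cc", ..., "#ccd9f2"]: seven shades of one hue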

Rule 2: Embrace The Power Of Whitespace

I find that it’s good to offer a fair amount of “breathing room” around elements rather than trying to crowd everything in as close as possible. For example, finding a balance of space between the table rows and columns enhances the legibility of the contents as it helps distinguish the UI from the information.

I’ll qualify this by noting that “breathing room” often depends on the type of data that’s being presented, as well as the size of the device on which it’s being viewed. As such, it sometimes makes sense to enhance a table’s functionality by allowing the user to adjust the height and width of rows and columns for the most optimal experience.

If you are worried about using too few or too many colors, apply the 60/30/10 rule (see the sketch after the list below). It’s a basic pattern for any distribution selection. People use this strategy when budgeting assets like content and media, and it’s applicable to design. The rule says color usage should be distributed as follows:

  • 60% for neutral colors,
  • 30% for primary colors,
  • 10% for secondary colors (e.g., highlights, CTAs, and alerts).
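Applied to a design system token file, the split might look like this small sketch (the hex values and role names are placeholders):

// A toy sketch of the 60/30/10 split; the values are placeholders.
const palette = Object.freeze({
    neutral: "#f7f8fa",  // ~60%: backgrounds and large surfaces
    primary: "#3366cc",  // ~30%: headers, stripes, structural accents
    accent:  "#e07a00"   // ~10%: highlights, CTAs, and alerts
});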

Rule 3: Avoid Grays

Talking about neutral colors: in color theory, gray represents neutrality and balance. Its meaning likely comes from being the shade between white and black, and it is often perceived as the absence of color. It’s hard to overdo it; its light shades don’t oppress, so gray is just “okay.”

However, gray does carry some negative connotations, particularly when it comes to depression and loss. Its absence of color makes it dull. For this reason, designers often resort to it to de-emphasize an element or certain bits of data.

But such a philosophy of gray will only work in black-and-white designs, such as on the Apple website. Though, as I mentioned before, it works really well there, since gray is a tone of black or a shade of white.

The problem, however, comes up when other colors are added to the color palette, which leads to a change in a color’s roles and functions. In the case of gray, putting it next to brighter colors makes the design pale and dull.

Having no color of its own, gray seems to eat away at the brightness of neighboring elements. Instead of maintaining balance, gray makes the design cloudy and unclear. After all, next to already-highlighted elements, gray makes content seem not just less significant but unnecessary to our perception.

That does not mean you should totally give up gray. But highlighting some information inherently de-emphasizes other information, negating the need for gray in the first place.

The easy way out is to replace gray with lighter shades of a palette’s base color on a table cell’s background. The effect is the same, but the overall appearance will pop more without adding more noise or cognitive load.

Rule 4: Know What’s Worthy Of Highlighting

Designers are always looking for a way to make their work stand out. I get the temptation because bold and bright colors are definitely exciting and interesting.

Blogs can be considered a good example of this problem as their variety is wide and growing, and a lot of platforms prioritize exclusive design over inclusive design.

For example, Medium uses only black and shades of it for its color palette, which significantly facilitates even simple tasks like reading titles. Hackernoon, although it looks interesting and draws attention, requires more concentration and doesn’t let you “breathe” as freely as Medium does.

In analytical software, that only leads to a table design that emphasizes a designer’s needs ahead of the user’s needs.

Don’t get me wrong — a palette that focuses on shades rather than a large array of exciting colors can still be exciting and interesting. That invites a discussion about which grid elements benefit from color. Here are my criteria for deciding which those are and which colors add the most benefit in a given situation.

Active cells: If the user clicks on a specific table cell or selects a group of cells, we can add focus to it to indicate the user’s place in the data. The color needs to call attention to the selection without becoming a distraction, perhaps by changing the border color to the palette’s base color and using a light shade of it for the background so as to maintain WCAG-compliant contrast with the text color.

Tip! It’s also good to highlight the row and column that a focused cell belongs to, as this information is a common thing to check when deciphering the cell’s meaning. You can highlight the entire row and column it belongs to or, even better, just the first cell of the row and column.

Error messaging: Error messages definitely benefit from color because, in general, errors contain critical feedback for the user to take corrective action.

A good example might be an injected alert that informs the user that the table’s functionality is disabled until an invalid data point is fixed. Reds, oranges, and yellows are commonly used in these situations but bear in mind that overly emphasizing an error can lead to panic and stress. (Speaking of error messaging, Vitaly Friedman has an extensive piece on designing effective error messages, including the pitfalls of relying too heavily on color.)

Outstanding data: I’m referring to any data in the table that is an outlier. For example, in a table that compares data points over time, we might want to highlight the high and low points for the sake of comparison. I suggest avoiding reds and greens, as they are commonly used to indicate success and failure. Perhaps styling the text color with a darker shade of a palette’s base color is all you need to call enough attention to these points without the user losing track of them.

The key takeaway:

Data-heavy tables are already overwhelming, and we don’t want any additional noise. Try to remove all unnecessary colors that add to a user’s cognitive load.

Tip! Remember the main goal when designing a table: reliability, not beauty. Always check your final decisions, ideally with a variety of target users. I really recommend using contrast checkers to spot mistakes quickly and efficiently correct them.
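If you want to script such checks, the WCAG 2.x contrast-ratio formula is compact enough to inline. Here is a small sketch; the sample colors are placeholders:

// Relative luminance per the WCAG 2.x definition.
function luminance([r, g, b]) {
    const [R, G, B] = [r, g, b].map((channel) => {
        const c = channel / 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between any two colors, always ≥ 1.
function contrastRatio(foreground, background) {
    const [light, dark] = [luminance(foreground), luminance(background)]
        .sort((a, b) => b - a);
    return (light + 0.05) / (dark + 0.05);
}

// Dark blue text on a light tint of the same hue:
console.log(contrastRatio([13, 26, 51], [204, 217, 242]).toFixed(2));
// ≈ 12, comfortably above the 4.5:1 minimum for normal text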

Typographical Considerations

The fonts we use to represent tabular data are another aspect of a table’s look and feel that we need to address when it comes to implementing an inclusive design. We want the data to be as legible and scannable as possible, and I’ve found that the best advice boils down to the typography of the content — especially for numerics — as well as how it is aligned.

Rule 1: The Best Font Is A “Simple” Font

The trick with fonts is the same as with colors: simplicity. The most effective font is one that takes less brainpower to interpret rather than one that tries to stand out.

No, you don’t need to ditch your Google Fonts or any other font library you already use, but choose a font from it that meets these recommendations:

  1. Sans-serif fonts (e.g., Helvetica, Arial, and Verdana) are more effective because they tend to take up less space in a dense area — perfect for promoting more “breathing room” in a crowded table of data.
  2. A large x-height is always easier to read. The x-height is the height of the body of a lowercase letter, excluding ascenders and descenders. In other words, it’s the height of the lowercase “x” in the font.
  3. Monospace fonts make it easier to compare cells because the width of each character is consistent, resulting in evenly-spaced lines and cells.
  4. Regular font weights are preferable to bolder weights because boldfaced text is another form of highlighting or emphasizing content, which can lead to confusion.
  5. A stable, open counter. The counter is the enclosed space in letters like “o” and “b.” Fonts with distorted counters render poorly at small sizes and are hard to read.

Fonts that fulfill these criteria are more legible and versatile than others and should help whittle down the number of fonts you have to choose from when choosing your table design.

Rule 2: Number Formatting Matters

When choosing a font, designers often focus on good legible letters and forget about numbers. Needless to say, numbers often are what we’re displaying in tables. They deserve first-class consideration when it comes to choosing an effective font for a good table experience.

As I mentioned earlier, monospace fonts are an effective option when numbers are a table’s primary content. Each character takes up the same width, and that consistent spacing helps align values between rows and columns. In my experience, finding a proportional font that doesn’t produce a narrow “1” is difficult.

If you compare the two fonts in the figure above, it’s pretty clear that data is easier to read and compare when the content is aligned and the characters use the same amount of space. There’s less distance for the eye to travel between data points and less of a difference in appearance to consider whether one value is greater than the other.

If you are dealing with fractions, you will want to consider a font that supports that format or go with a variable font that supports font-variant-numeric features for more control over the spacing.
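For instance, a hypothetical React table cell could opt into evenly spaced figures like this (the component and prop names are ours):

// Tabular figures keep digits the same width even in proportional fonts.
function NumericCell({ value }) {
    return (
        <td style={{
            textAlign: "right",
            fontVariantNumeric: "tabular-nums"
        }}>
            {value.toLocaleString("en-US")}
        </td>
    );
}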

Rule 3: There Are Only Two Table Alignments: Left And Right

Technically, there are four alignments: left, right, center, and justify. We know that because the CSS text-align property supports all four of them.

My personal advice is to avoid using center alignment, except in less-common situations where unambiguous data is presented with consistently-sized icons. But that’s a significant and rare exception to the rule, and it is best to use caution and good judgment if you have to go there.

Justified content alters the spacing between characters to achieve a consistent line length, but that’s another one to avoid, as the goal is less about line lengths than it is about maintaining a consistent amount of space between characters for a quick scan. That is what monospaced fonts are effective for.

Data should instead be aligned toward the left or right, and which one is based on the user’s language preference.

Then again, at school, we’re taught to compare numbers in a right-to-left direction by looking first at single units, then tens, followed by hundreds, then thousands, and so forth. Accordingly, the right alignment could be a better choice that’s universally easier to read regardless of a person’s language preference. You may notice that spreadsheet apps like Excel, Sheets, and Notion align numeric values to the right by default.

There are exceptions to that rule, of course, because not all numbers are measurements. Qualitative numbers probably make more sense with left alignment since that matches the context in which they are used. They aren’t used for comparison and are perceived as textual information written in numbers (see the sketch after this list). Examples include:

  • Dates (e.g., 12/28/2050),
  • Zip/Postal code (e.g., 90815),
  • Phone number (e.g., 555-544-4349).
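Here is the sketch mentioned above: a hypothetical row that aligns measurements to the right and qualitative numbers to the left.

// Dates and zip codes read as text (left); quantities and totals are
// compared as magnitudes (right). All names are illustrative.
function OrderRow({ order }) {
    return (
        <tr>
            <td style={{ textAlign: "left" }}>{order.date}</td>
            <td style={{ textAlign: "left" }}>{order.zip}</td>
            <td style={{ textAlign: "right" }}>{order.quantity}</td>
            <td style={{ textAlign: "right" }}>{order.total}</td>
        </tr>
    );
}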

Table headings should be aligned to the same edge as the data presented in the column. I know there could be disagreement here, as the default UA styling for modern browsers centers table headings.

The screenshots above are examples of bad and good headers. When looking at the first screenshot, your initial focus is likely drawn to the column headers, which is good! That allows you to understand what the table is about quickly. But after that initial focus, the bold text is distracting and tricks your brain into thinking the header is the most important content.

The header in the second screenshot also uses bold text. However, notice how changing the color from black to white emphasizes the headers at the same time. That negates the impact bolding has, preventing potential cognitive load.

At this point, I should include a reminder to avoid gray when de-emphasizing table elements. For example, notice the numbers in the far left column and very top row. They get lost against the background color of the cells and even further obscured by the intense background color of surrounding cells. There’s no need to de-emphasize what is already de-emphasized.

I also suggest using short labels to prevent them from competing with the data. For example, instead of a heading that reads “Grand Total of Annual Revenue,” try something like “Total Revenue” or “Grand Total” instead.

Table Layout Considerations

There once was a time when tables were used to create webpage layouts because, again, it was a simple and understandable way to present the information in the absence of standardized CSS layout features. That’s not the case today, thankfully, but that period taught us a lot about best practices when working with table design that we can use today.

Rule 1: Fewer Borders = More White Space

Borders are commonly used to distinguish elements from one another. In tables, specifically, they might be used to form outlines around rows and columns. That distinction is useful but faces the same challenge we covered with color: too many borders can steal focus from the data, making the design busy and cluttered. With proper design and text alignment, however, borders can become unnecessary.

Borders help us navigate the table and delimit individual records. At the same time, if there are many of them in a grid, it becomes a problem in large tables with a lot of rows and columns. To prevent the cells from being too densely connected, try adding more space between them with padding. As I have mentioned before, negative space is not an enemy but a design saver.

That said, the law of diminishing returns applies to how much space there should be, particularly when considering a table’s width. For example, a table might not need to flex to the full width of its parent container by default. It depends on the content, of course. Avoiding large gaps between columns prevents a reader’s eyes from traveling long distances when scanning data and helps them avoid mistakes.

I know that many front-enders struggle with column widths. Should they be even? Should they only be as wide as the content that’s in them? It’s a juggling act that, in my mind, is not worth the effort. Some cells will always be either too wide or too narrow when table cells contain data points that result in varying line lengths. Embrace that unevenness, allowing columns to take up a reasonable amount of space they need to present the data and scale down to as little as they need without being so narrow that words and numbers start breaking lines.

Lines should be kept to a minimum. Add them if adjusting the alignment, joining cells, and increased spacing is not enough to indicate the direction — or keep them as light as possible.

Allow multi-line wrapping when you really need it, such as when working with longer data points with just enough room around them to indicate the alignment direction. But if you caught yourself thinking of using multi-line wrapping in a grid, then first of all, analyze whether there is a more practical way to visualize the data.

Rule 2: Stylish Rows, Stylish Columns

When deciding how to style a table’s rows, it’s important to understand the purpose of the table you are developing. Reducing visual noise helps present a clear picture of the data in smaller datasets, but it isn’t enough for large ones.

It’s easy for a user to lose their place when scrolling through a table that contains hundreds or thousands of rows. This is where borders can help a great deal, as well as zebra striping, for a visual cue that helps anchor a user’s eyes enough to hold focus on a spot while scanning.

Speaking of zebra striping, it’s often used as a stylistic treatment rather than a functional enhancement. Being mindful of which colors are used for the striping and how they interact with other colors and shades used for highlighting information will go a long way toward maintaining a good user experience that avoids overwhelming color combinations. I often use a slightly darker shade of the table’s default background color on alternating rows (or columns) when establishing stripes. If that’s white, then I will go with the lightest shade of my palette’s base color. The same principle applies to borders — they should mark the structure while staying barely visible.

Typically, row density gravitates around 40–56px, with a minimum padding of 16px on both the left and right edges of each column.
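Pulling the striping and density advice together, a row component might look like the following sketch; the shade and pixel values are placeholders, not prescriptions:

function DataRow({ cells, index }) {
    return (
        <tr style={{
            height: "48px", // within the 40–56px density range
            // A slightly darker shade of the background for zebra striping.
            backgroundColor: index % 2 === 0 ? "#ffffff" : "#f2f5fb"
        }}>
            {cells.map((cell, i) => (
                // 16px side padding keeps columns from crowding each other.
                <td key={i} style={{ padding: "0 16px" }}>{cell}</td>
            ))}
        </tr>
    );
}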

Feature Enhancements

Tables are often thought of as static containers for holding data, but we’ve all interacted with tables that do lots of other things, like filtering and reordering.

Whatever features are added to a table, it’s important to let users customize the table themselves based on their preferences. Then the user experience you create can become even better by conforming to the user’s comfort level. As with everything else, there is a line. Smaller datasets may not need the same enhancements for filtering data that large datasets do, for example, because they may wind up causing more confusion than convenience and raise the threshold for understanding the data.

In addition to the ability to customize a table’s elements, such as colors, fonts, conditional formatting, value formatting, and cell sizing, there are a few questions you can ask to help determine the enhancements a table might need for a better experience.

Could A User Lose Context When Scrolling?

We’ve already discussed how a table with hundreds of rows or columns can lead to many user scrolling and cognitive errors. Striping is one way to help users remain focused on a particular spot, but what if there’s so much scrolling that the table’s headers are no longer available?

If that’s a possibility, and the headers are important for establishing the context of the presented data, then you might consider sticky positioning on the headers so they are always available for reference. Chris Coyier has a nice demo that implements sticky headers and a sticky first column.
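A minimal sticky-header sketch in JSX might look like this; it assumes the table sits in a scrollable container, and all names are ours:

function StickyHeader({ columns }) {
    return (
        <thead>
            <tr>
                {columns.map((column) => (
                    <th
                        key={column}
                        style={{
                            position: "sticky",
                            top: 0,
                            // An opaque background keeps scrolled rows
                            // from showing through the header.
                            backgroundColor: "#ffffff"
                        }}
                    >
                        {column}
                    </th>
                ))}
            </tr>
        </thead>
    );
}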

Who Can Have Problems Using My Design? (Accessibility Support)

Of all the points, this is the most difficult to implement, but in our context, it is also the most important. For people with diagnosed disorders, their condition has a much stronger impact on their work process. Therefore, supporting an additional — and optional — accessibility mode is necessary. Each element must be adapted for screen readers, be navigable via keyboard, and use the most semantic markup possible. This will help people who use assistive technology work without a loss in performance.

Conclusion

Thanks for letting me share my best practices for presenting tabular data on the web. It’s amazing how something as seemingly simple as a table element can quickly grow in scope when we start considering user needs and enhancements to include as many of those needs as possible.

We discussed a great number of things that get in the way of an inclusive table design, including our own cognitive biases and design choices. At the same time, we covered strategies for tackling those obstacles from a wide range of considerations, from design choices all the way to determining possible features for enhancing a user’s experience when interacting with the table and the data it contains.

There can be a lot of headwork that goes into a table implementation, but not everything in this article has to be considered for every situation. A lot of the advice I’ve shared — like so many other things on the web — simply depends on the specific case. That’s why we spent a good amount of time defining the goals for an effective table experience:

  • Reduce the cognitive load.
  • Maximize the signal-to-noise ratio.
  • Use correct cognitive biases to boost the user experience.

But if you only take one thing away from this, I’d say it is this: in data analytics, data > everything else. Keeping that idea in mind throughout the development process prevents spoiling your work with frivolous design choices and features that work against our goals.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Yuliia Nikitina)
<![CDATA[Primitive Objects In JavaScript: When To Use Them (Part 2)]]> https://smashingmagazine.com/2023/06/primitive-objects-javascript-part-2/ https://smashingmagazine.com/2023/06/primitive-objects-javascript-part-2/ Mon, 05 Jun 2023 10:00:00 GMT Writing programs in JavaScript is approachable at the beginning. The language is forgiving, and you get accustomed to its affordances. With time and experience working on complex projects, you start to appreciate things like control and precision in the development flow.

Another thing you might start to appreciate is predictability, but that’s way less of a guarantee in JavaScript. While primitive values are predictable enough, objects aren’t. When you get an object as an input, you need to check for everything:

  • Is it an object?
  • Does it have that property you’re looking for?
  • When a property holds undefined, is that its value, or is the property itself missing?

It’s understandable if this level of uncertainty leaves you slightly paranoid in the sense that you start to question all of your choices. Subsequently, your code becomes defensive. You think more about whether you’ve handled all the faulty cases or not (chances are you have not). And in the end, your program is mostly a collection of checks rather than bringing real value to the project.

By making objects primitive, many of the potential failure points are moved to a single place — the one where objects are initialized. If you can make sure that your objects are initialized with a certain set of properties and those properties hold certain values, you don’t have to check for things like the existence of properties anywhere else in your program. You could guarantee that undefined is a value if you need to.

Let’s look at one of the ways we can make primitive objects. It’s not the only way or even the most interesting one. Rather, its purpose is to demonstrate that working with read-only objects doesn’t have to be cumbersome or difficult.

Note: I also recommend you check the first part of the series, where I covered some aspects of JavaScript that help bring objects closer to primitive values, which in turn allows us to benefit from common language features that aren’t usually associated with objects, like comparisons and arithmetic operators.

Making Primitive Objects In Bulk

The most simple, most primitive (pun intended) way to create a primitive object is the following:

const my_object = Object.freeze({});

This single line results in an object that can represent anything. For instance, you could implement a tabbed interface using an empty object for each tab.

import React, { useState } from "react";

const summary_tab = Object.freeze({});
const details_tab = Object.freeze({});

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                <label
                    className={active === summary_tab ? "active" : ""}
                    onClick={() => {
                        setActive(summary_tab);
                    }}
                >
                    Summary
                </label>
                <label
                    className={active === details_tab ? "active": ""}
                    onClick={() => {
                        setActive(details_tab);
                    }}
                >
                    Details
                </label>
            </div>
            <div className="tabbed-content">
                {active === summary_tab && summary_children}
                {active === details_tab && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

If you’re like me, that tabs element just screams to be reworked. Looking closely, you’ll notice that the tab elements are similar and need two things: an object reference and a label string. Let’s include the label property in the tab objects and move the objects themselves into an array. And since we’re not planning to change the tabs in any way, let’s also make that array read-only while we’re at it.

const tab_kinds = Object.freeze([
    Object.freeze({ label: "Summary" }),
    Object.freeze({ label: "Details" })
]);

That does what we need, but it is verbose. The approach we’ll look at now is often used to hide repeating operations to reduce the code to just the data. That way, it is more apparent when the data is incorrect. What we also want is to freeze objects (including the array) by default rather than it being something we have to remember to type out. For the same reason, the fact that we have to specify a property name every time leaves room for errors, like typos.

To easily and consistently initialize arrays of primitive objects, I use a populate function. I don’t actually have a single function that does the job. I usually create one every time based on what I need at the moment. In the particular case of this article, this is one of the simpler ones. Here’s how we’ll do it:

function populate(...names) {
    return function(...elements) {
        return Object.freeze(
            elements.map(function (values) {
                return Object.freeze(names.reduce(
                    function (result, name, index) {
                        result[name] = values[index];
                        return result;
                    },
                    Object.create(null)
                ));
            })
        );
    };
}

If that one feels dense, here’s one that’s more readable:

function populate(...names) {
    return function(...elements) {
        const objects = [];

        elements.forEach(function (values) {
            const object = Object.create(null);

            names.forEach(function (name, index) {
                object[name] = values[index];
            });

            objects.push(Object.freeze(object));
        });

        return Object.freeze(objects);
    };
}

With that kind of function at hand, we can create the same array of tabbed objects like so:

const tab_kinds = populate(
    "label"
)(
    [ "Summary" ],
    [ "Details" ]
);

Each array in the second call represents the values of resulting objects. Now let’s say we want to add more properties. We’d need to add a new name to the first call and a value to each array in the second call.

const tab_kinds = populate(
    "label",
    "color",
    "icon"
)(                                          
    [ "Summary", colors.midnight_pink, "💡" ],
    [ "Details", colors.navi_white, "🔬" ]
);

Given some whitespace, you could make it look like a table. That way, it’s much easier to spot an error in huge definitions.

You may have noticed that populate returns another function. There are a couple of reasons to keep it in two function calls. First, I like how two contiguous calls create an empty line that separates keys and values. Secondly, I like to be able to create these sorts of generators for similar objects. For example, say we need to create those label objects for different components and want to store them in different arrays.

Let’s get back to the example and see what we gained with the populate function:

import React, { useState } from "react";
import populate_label from "./populate_label";

const tabs = populate_label(
    [ "Summary" ],
    [ "Details" ]
);

const [ summary_tab, details_tab ] = tabs;

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                {tabs.map((tab) => (
                    <label
                        key={tab.label}
                        className={tab === active ? "active" : ""}
                        onClick={() => {
                            setActive(tab);
                        }}
                    >
                        {tab.label}
                    </label>
                ))}
            </div>
            <div className="tabbed-content">
                {summary_tab === active && summary_children}
                {details_tab === active && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

Using primitive objects makes writing UI logic straightforward.

Using functions like populate is less cumbersome for creating these objects and seeing what the data looks like.

Check That Radio

One of the alternatives to the approach above that I’ve encountered is to retain the active state — whether the tab is selected or not — stored as a property of the tabs object:

const tabs = [
    {
        label: "Summary",
        selected: true
    },
    {
        label: "Details",
        selected: false
    },
];

This way, we replace tab === active with tab.selected. That might seem like an improvement, but look at how we would have to change the selected tab:

function select_tab(tab, tabs) {
    tabs.forEach((tab) => tab.selected = false);
    tab.selected = true;
}

Because this is logic for a radio button, only a single element can be selected at a time. So, before setting an element to be selected, we first need to make sure that all the other elements are unselected. Yes, it’s silly to do it like that for an array with only two elements, but the real world is full of longer lists than this example.

With a primitive object, we need a single variable that represents the selected state. I suggest assigning one of the elements to that variable to make it the currently selected element, or setting it to undefined if your implementation allows for no selection.

With multi-choice elements like checkboxes, the approach is almost the same. We replace the selection variable with an array. Each time an element is selected, we push it to that array, or in the case of Redux, we create a new array with that element present. To unselect it, we either splice it or filter out the element.

let selected = []; // Nothing is selected.

// Select.
selected = selected.concat([ to_be_selected ]);

// Unselect.
selected = selected.filter((element) => element !== to_be_unselected);

// Check if an element is selected.
selected.includes(element);

Again, this is straightforward and concise. You don’t need to remember if the property is called selected or active; you use the object itself to determine that. When your program becomes more complex, those lines would be the least likely to be refactored.

In the end, it is not a list element’s job to decide whether it is selected or not. It shouldn’t hold this information in its state. For example, what if it’s simultaneously selected and not selected in several lists at a time?
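To illustrate, here is a small sketch reusing the populate_label helper from earlier: the same frozen tab object can be selected in one list and not in another without any contradiction.

import populate_label from "./populate_label";

const [ summary_tab, details_tab ] = populate_label(
    [ "Summary" ],
    [ "Details" ]
);

// Selection lives with each list, not on the element itself.
let sidebar_selection = summary_tab;  // selected in the sidebar...
let preview_selection = details_tab;  // ...but not in the preview

console.log(sidebar_selection === summary_tab); // true
console.log(preview_selection === summary_tab); // false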

Alternative To Strings

The last thing I’d like to touch on is an example of string usage I often encounter.

Text is a good trade-off for interoperability. You define something as a string and instantly get a representation of a context. It’s like getting an instant energy rush from eating sugar. As with sugar, the best case is that you get nothing in the long term. That said, it is unfulfilling, and you inevitably get hungry again.

The problem with strings is that they are for humans. It’s natural for us to distinguish things by giving them a name. But a program doesn’t understand the meaning of those names.

Most code editors and integrated development environments (IDEs) don’t understand strings. In other words, your tools won’t tell you whether or not the string is correct.

Your program only knows whether two strings are equal or not. And even then, telling whether strings are equal or unequal doesn’t necessarily provide an insight into whether or not any of those strings contain a typo.

Objects provide more ways to see that something is wrong before you run your program. Because you cannot write literals for primitive objects, you have to get a reference from somewhere. For example, if the reference is a variable and you make a typo, you get a reference error. There are tools that can catch that sort of mistake before the file is even saved.

If you were to get your objects from an array or another object, then JavaScript won’t give you an error when the property or an index does not exist. What you get is undefined, and that’s something you could check for. You have a single thing to check. With strings, you have surprises you might want to avoid, like when they’re empty.
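
To make the contrast concrete, here is a small illustration (the names are hypothetical):

const summaryTab = { label: "Summary" };
let active = summaryTab;

// A typo inside a string fails silently; this branch is simply never taken.
if (active.label === "Sumary") { /* nothing warns you about the misspelling */ }

// A typo in an identifier fails loudly, and linters flag it before you run.
if (active === sumaryTab) { /* ReferenceError: sumaryTab is not defined */ }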

Another use of strings I try to avoid is checking whether we got the object we want. Usually, this is done by storing a string in a property named id. Say we have a variable, and to check whether it holds the object we want, we check whether the string in its id property matches the one we expect. To do that safely, we would first check whether the variable holds an object at all. If the variable does hold an object but the object lacks the id property, then we get undefined, and we’re fine. However, if the variable holds one of the bottom values (null or undefined), then we cannot ask for the property directly. Instead, we have to either make sure that only objects arrive at this point or do both checks in place.

const myID = "Oh, it's so unique";

function magnification(value) {
    // Guard against null and undefined, confirm we have an object,
    // and only then compare the id strings.
    if (value && typeof value === "object" && value.id === myID) {
        // do magic
    }
}

Here’s how we can do the same with primitive objects:

import data from "./the file where data is stored";

function magnification(value) {
    // A single reference comparison replaces all three checks above.
    if (value === data.myObject) {
        // do magic
    }
}

The benefit of strings is that they are a single thing that could be used for internal identification and are immediately recognizable in logs. They sure are easy to use right out of the box, but they are not your friend as the complexity of a project increases.

I find there’s little benefit in relying on strings for anything other than output to the user. The interoperability you give up by moving from strings to primitive objects can be recovered gradually, without changing how you handle basic operations like comparisons.

Wrapping Up

Working directly with objects frees us from the pitfalls that come with other methods. Our code becomes simpler because we write only what the program needs to do. By organizing our code with primitive objects, we are less affected by the dynamic nature of JavaScript and some of its baggage. Primitive objects give us more guarantees and a greater degree of predictability.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Kirill Myshkin)
<![CDATA[How To Enable Collaboration In A Multiparty Setting]]> https://smashingmagazine.com/2023/06/enable-collaboration-multiparty-setting/ https://smashingmagazine.com/2023/06/enable-collaboration-multiparty-setting/ Fri, 02 Jun 2023 16:00:00 GMT As Artificial Intelligence becomes more widespread and pervasive, the transition to a data-driven age poses a conundrum for many: Will AI replace me at my job? Can it become smarter than humans? Who is making the important decisions, and who is accountable?

AI is becoming more and more complex, and tools like ChatGPT, Siri, and Alexa are already a part of everyday life to an extent where even experts struggle to grasp and explain the functionality in a tangible way. How can we expect the average human to trust such a system? Trust matters not only in decision-making processes but also in order for societies to be successful. Ask yourself this question: Who would you trust with a big personal or financial decision?

Today’s banking counseling sessions are associated with various challenges: Besides preparation and follow-up, the consultant is also busy with many different tasks during the conversation. The cognitive load is high, and tasks are either done on paper or with a personal computer, which is why the consultant can’t engage sufficiently with the client. Clients are mostly novices who are not familiar with the subject matter. The consequent state of passivity or uncertainty often stems from a phenomenon known as information asymmetry, which occurs when the consultant has more or better information than the client.

In this article, we propose a new approach based on co-creation and collaboration in advisory services. An approach that enables the consultant to simply focus on the customers’ needs by leveraging the assistance of a digital agent. We explore the opportunities and limitations of integrating a digital agent into an advisory meeting in order to allow all parties to engage actively in the conversation.

Rethinking Human-Machine Environments In Advisory Services

Starting from the counseling session described above, we tackled the issues of information asymmetry, trust building, and cognitive overload within the context of a research project.

Understanding the linguistic landscape of Switzerland with its various Swiss-German dialects, the digital agent “Mo” supports consultants and clients in banking consultations by taking over time-consuming tasks, providing support during the consultation, and extracting information. By means of an interactive table, the consultation becomes a multimodal environment in which the agent acts as a third interaction partner.

The setup enables a collaborative exchange between interlocutors, as information is equally visible and accessible to all parties (shared information). Content can be placed anywhere on the table through natural, haptic interactions. Whether the agent records information in the background, actively participates in the composition of a stock portfolio, or warns against risky transactions, Mo “sits” at the table throughout the entire consultation.

To promote active participation from all parties during the counseling session, we have pinpointed crucial elements that facilitate collaboration in a multi-party setting:

  • Shared Device
    All information is made equally visible and interactable for all parties.
  • Collaborative Digital Agent
    By using human modes of communication, social cues, and the support of local dialects, the agent becomes accessible and accepted.
  • Comprehensible User Interfaces
    Multimodal communication helps to convey information in social interactions. Through the use of different output channels, we can convey information in different complexities.
  • Speech Patterns for Voice User Interfaces
    Direct orders to an AI appear unnatural in a multi-party setting. The use of different speech and turn-taking patterns allows the agent to integrate naturally into the conversation.

In the next sections, we will take a closer look at how collaborative experiences can be designed based on those key factors.

“Hello Mo”: Designing Collaborative Voice User Interfaces

Imagine yourself sitting at the table with your bank advisor in a classic banking advisory meeting. The consultant tries to explain to you a ton of banking-specific stuff, all while using a computer or tablet to display stock price developments or to take notes on your desired transactions. In this setup, it is hard for consultants to keep up a decent conversation while retrieving and entering data into a system. This is where voice-based interactions save the day.

When using voice as an input method during a conversation, users do not have to change context (e.g., take out a tablet or operate a screen with a mouse or keyboard) in order to enter or retrieve data. This helps the consultant perform a task more efficiently while fostering a personal relationship with the client. However, the true strength of voice interactions lies in their ability to handle complex information entry. For example, purchasing stocks requires the input of multiple parameters, such as the title or the number of shares. Whereas in a GUI all of these input variables have to be tediously entered by hand, a VUI offers the option of entering everything in one sentence.
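
As a rough illustration, a single utterance like “Buy 20 shares of Smashing Media” can carry every parameter that a form would otherwise collect field by field (the payload shape below is hypothetical):

// What a GUI gathers across several separate inputs…
const formInput = { action: null, title: null, shares: null };

// …a VUI can extract from one recognized sentence:
const recognizedIntent = {
    action: "buy",
    title: "Smashing Media",
    shares: 20,
};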

Nonetheless, VUIs are still uncharted territory for many users and are accordingly viewed with a huge amount of skepticism. Thus, it is important to consider how we can create voice interactions that are accessible and intuitive. To achieve this goal, it is essential to grasp the fundamental principles of voice interaction, such as the following speech patterns.

Command and Control

This pattern is widely used by popular voice assistants such as Siri, Alexa, and Google Assistant. As the name implies, the assistants are addressed with a direct command — often preceded by a signal “wake word.” For example,

“Hey, Google” → Command: “Turn on the Bedroom Light”

Conversational

The Conversational Pattern, in which the agent understands intents directly from the context of the conversation, is less common in production systems. Nevertheless, we can find examples in science fiction, such as HAL (2001: A Space Odyssey) or J.A.R.V.I.S. (Iron Man 3). The agent can directly extract intent from natural speech without the need for a direct command to be uttered. In addition, the agent may speak up on its own initiative.

As the Command and Control approach is widely used in voice applications, users are more familiar with this pattern. However, utilizing the Conversational Pattern can be advantageous, as it enables users to interact with the agent effortlessly, eliminating the requirement for them to be familiar with predefined commands or keywords, which they may formulate incorrectly.

In our case of a multi-party setting, users perceived the Conversational Pattern in the context of transaction detection as surprising and unpredictable. For the most part, this is due to the limitations of the intent recognition system. For example, during portfolio customization, stock titles are actively discussed. Not every utterance of a stock title corresponds to a transaction, as the consultant and client debate possibilities before execution. It is difficult, if not impossible, for the agent to distinguish between an option being discussed and an actual intent. In this case, command structures offer more reliability and control at the expense of the naturalness of the conversation, since the Command and Control Pattern results in unnatural interruptions and pauses in the conversation flow. To get the best of both worlds (natural interactions and predictable behavior), we introduce a completely new speech pattern:

Conversational Confirmation

Typically, transaction intents are formulated according to the following structure:

Interlocutor 1: We then buy 20 shares of Smashing Media Stocks (intent).
Interlocutor 2: Yes, let’s do that (confirmation).
Interlocutor 1: All right then, let’s buy Smashing Media Stocks (reconfirmation).

In the current implementation of the Conversational Pattern, the transaction would be executed after the first utterance, which users often perceived as irritating. In the Conversational Confirmation pattern, the system waits for both parties to confirm and executes the transaction only after the third utterance. By adhering to the natural rules of human conversation, this approach meets the users’ expectations.
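
A minimal sketch of that three-step flow, assuming an upstream recognizer that classifies each utterance (all names here are illustrative, not from the project):

let pending = null;    // transaction awaiting confirmation
let confirmed = false; // has the second party agreed?

function onUtterance(utterance) {
    if (utterance.kind === "intent") {
        // 1. "We then buy 20 shares of Smashing Media Stocks."
        pending = utterance.transaction;
        confirmed = false;
    } else if (utterance.kind === "confirmation" && pending) {
        // 2. "Yes, let's do that."
        confirmed = true;
    } else if (utterance.kind === "reconfirmation" && pending && confirmed) {
        // 3. "All right then, let's buy..." Only now is the trade executed.
        executeTransaction(pending);
        pending = null;
        confirmed = false;
    }
}

// Stand-in for the actual call to the banking backend.
function executeTransaction(transaction) {
    console.log("Executing", transaction);
}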

Conclusion

  1. Regarding the users’ mental model of digital agents, the Command and Control Pattern provides users with more control and security.
  2. The Command and Control Pattern is suitable as a fallback in case the agent does not understand an intent.
  3. The Conversational Pattern is suitable when information has to be obtained passively from the conversation (e.g., logging).
  4. For collaborative counseling sessions, the Conversational Confirmation Pattern could greatly enhance the counseling experience and lead to a more natural conversation in a multi-party setting.

Sharing Is Caring: The Concept Of The Shared Device

In a world where personal devices such as PCs, mobile phones, and tablets are prevalent, we have grown accustomed to interacting with technical devices in “single-player mode.” The use of private devices undoubtedly has its advantages in certain situations (as in not having to share the million cute cats we google during work with our boss). But when it comes to collaborative tasks — sharing is caring.

Put yourself back into the previously described scenario. At some point, the consultant is trying to show stock price trends on the computer or tablet screen. However, regardless of how the screen is positioned, at least one of the participants has a limited view. And because the computer is a personal device of the consultant, the client is excluded from actively engaging with it — leading to the problem of an unequal distribution of information.

By integrating an interactive tabletop projection into the consultation meeting, we aimed to overcome the limitations of “personal devices,” improving trust, transparency, and decision empowerment. It is essential to understand that human communication relies on various channels, i.e., modalities (voice, sight, body language, and so on), which help individuals to express and comprehend complex information more effectively. The interactive table as an output system facilitates this aspect of human communication in the digital-physical realm. In a shared device, we use the physical space as an interaction modality. The content can be intuitively moved and placed in the interaction space using haptic elements and is no longer bound to a screen. These haptic tokens are equally accessible to all users, encouraging especially novice users to interact and collaborate on a regular tabletop surface.

The interactive tabletop projection also makes information more comprehensible for users. For example, during the consultation, the agent updates the portfolio visualization in real time. The impact of a transaction on the overall portfolio can be directly grasped and pulled closer by the client and advisor and used as a basis for discussion.

The result is a transparent approach to information, which increases the understanding of bank-specific and system-specific processes, consequently improving trust in the advisory service and leading to more interaction between customer and advisor.

Apart from the spatial modality, the proposed mixed reality system provides other input and output channels, each with its unique characteristics and strengths. If you are interested in this topic, this article on Smashing provides a great comparison of VUIs and GUIs and when to use which.

Conclusion

The proposed mixed reality system fosters collaboration since:

  1. Information is equally accessible to all parties (reducing information asymmetry, fostering shared understanding, and building trust).
  2. One user interface can be operated collectively by several interaction partners (engagement).
  3. Multisensory human communication can be transferred to the digital space (ease of use).
  4. Information can be better comprehended due to multimodal output (ease of use).

Next Stop: Collaborative AI (Or How To Make A Robot Likable)

For consultation services, we need an intelligent agent to reduce the consultant’s cognitive load. Can we design an agent that is trustworthy, even likable, and accepted as a third collaboration partner?

Empathy For Machines

Whether it’s machines or humans, empathy is crucial for interactions, and social cues are the salt and pepper to achieve this. Social cues are verbal or nonverbal signals that guide conversations and other social interactions by influencing our perceptions of and reactions toward others. Examples of social cues include eye contact, facial expressions, tone of voice, and body language. These impressions are important communicative tools because they provide social and contextual information and facilitate social understanding. In order for the agent to appear approachable, likable, and trustworthy, we have attempted to incorporate social elements while designing the agent. As described above, social cues in human communication are transported through different channels. Transferring to the digital context once again requires the use of multimodality.

The visual manifestation of the agent makes it possible to develop character-defining elements, such as facial expressions and body language, in digital space, analogous to the human body. It also lets the agent highlight important context information, such as the current system status.

In terms of voice interactions, social cues play an important role in system feedback. For example, a common human communication practice is to confirm an action with a short “mhm” or “ok.” Applying this practice to the agent’s behavior, we tried to create a more transparent and natural-feeling VUI.

When designing voice interactions, it’s important to note that how the agent is perceived depends heavily on the speech pattern being used. Once the agent is addressed with a direct command, it is assigned a subordinate role (servant) and is no longer perceived as an equal interaction partner. When the agent recognizes the intent of the conversation on its own, it is perceived as more intelligent and trustworthy.

Mo: Ambassador Of System Transparency

Despite great progress in Swiss German speech recognition, transaction misrecognition still occurs. While dealing with an imperfect system, we have tried to take advantage of it by leveraging the agent to make system-specific processes more understandable and transparent. We implemented the well-known usability heuristic: the more comprehensible system-specific processes are, the better the understanding of a system and the more likely users feel empowered to interact with it (and the more they trust and accept the agent).

A core activity of every banking consultation meeting is the portfolio elaboration phase, where the consultant, client, and agent try to find the best investment solutions. In the process of adjusting the portfolio, transactions get added and removed with the helping hand of the agent. If “Mo” is not fully confident of a transaction, “Mo” checks in and asks whether the recognized transaction has been understood correctly.

The agent’s voice output follows the usual conventions of a conversation: as soon as an interlocutor is unsure about the content of a conversation, they speak up, politely apologize, and ask whether what they understood matches the intent of the conversation. In case the transaction was misunderstood, the system offers the possibility to correct the error by adjusting the transaction using touch and a scrolling token (Microsoft Dial). We deliberately chose these alternative input methods over repeating the intent with voice input to avoid repetitive errors and minimize frustration. By giving the user the opportunity to take action and stay in control in an actual error situation, the overall acceptance of the system and the agent is strengthened, creating fertile ground for collaboration.

Conclusion

  • Social cues provide opportunities to design the agent to be more approachable, likable, and trustworthy. They are an important tool for transporting context information and enabling system feedback.
  • Making the agent part of explaining system processes helps improve the overall acceptance and trust in both the agent and the system (Explainable AI).

Towards The Future

Irrespective of the specific consulting field, whether it’s legal, healthcare, insurance, or banking, two key factors significantly impact the quality of counseling. The first factor involves the advisor’s ability to devote undivided attention to the client, ensuring their needs are fully addressed. The second factor pertains to structuring the counseling session in a manner that facilitates equal access to information for all participants, presenting it in a way that even inexperienced individuals can understand. By enhancing customer experience through promoting self-determined and well-informed decision-making, businesses can boost customer retention and foster loyalty.

Introducing a shared device in counseling sessions offers the potential to address the problem of information asymmetry and promote collaboration and a shared understanding among participants. Does this mean that every consultation session depends on the proposed mixed reality setup? For physical consultations, the interactive tabletop projection (or an equivalent interaction space where all participants have equal access to information) does enable a democratic approach to information — personal devices just won’t do the job.

In the context of digital (remote) consultations, collaboration and transparency remain crucial, but the interaction space undergoes significant changes, thereby altering the requirements. Regardless of the specific interaction space, careful consideration must be given to conveying information in an understandable manner. Utilizing different modalities can enhance the comprehensibility of user interfaces, even in traditional mobile or desktop UIs.

To alleviate the cognitive load on consultants, we require a system capable of managing time-consuming tasks in the background. However, it is important to acknowledge that digital agents and voice interactions remain unfamiliar territory for many users, and there are instances where voice processing falls short of users’ high expectations. Nevertheless, speech processing will certainly see great improvements in the next few years, and we need to start thinking today about what tomorrow’s interactions with voice assistants might look like.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Hannah Kühne & Madlaina Kalunder)
<![CDATA[iA Presenter: A Case Study On Product Pricing Considerations]]> https://smashingmagazine.com/2023/06/ia-presenter-case-study-product-pricing-considerations/ https://smashingmagazine.com/2023/06/ia-presenter-case-study-product-pricing-considerations/ Fri, 02 Jun 2023 08:00:00 GMT This article is a sponsored by iA

So, you’ve created a thing. That thing could be anything, say a product the world never knew it needed or maybe a stellar SaaS app that makes everyone way more productive. You had a brilliant idea and took the initiative to make it happen. It’s time to put it on the market!

But wait… how much money are you going to charge for this thing? That’s often a way more difficult question to answer than it might seem. I mean, slap a price on the tin, and that’s it, right?

The truth is that pricing a product or service is one of the more challenging aspects of product development. Pricing is an inexact science, and chances are you will not get it right the first time. But where do you even begin?

That’s where the team at Information Architects — commonly known as iA — found itself when tasked with pricing a new product called iA Presenter. iA already had a hit product on its hands, the popular iA Writer app, with its claim to fame being a minimal, distraction-free writing interface. iA Writer is already a mature offering, having been available for many years and having undergone several significant iterations since its initial release. How does a new offering like iA Presenter fit into the picture?

Let’s use iA Presenter to study the considerations that go into product pricing. Its status as a brand-new product that sits alongside an existing product with an established history makes iA Presenter an interesting case study on pricing. Plus, the iA team was generous enough to share a bunch of the research and work that went into their pricing for iA Presenter.

Finding Pricing Parallels

The first step to pricing might be looking at what others are doing. Chances are that you are not the only player in the market, and you can certainly learn by observing what others are doing. I know that’s what I did when getting into the pricing of a SaaS-based app. There were plenty of competitors in that particular market, and mapping them out in a spreadsheet was a nice way to compare the similarities and differences — not only in the prices themselves but the pricing models as well. Some were one-time purchases, but many were recurring subscriptions. Some offered free trials, while others relied on a generous return policy. Some required a credit card upfront, and others allowed you to jump right into the app. You get the idea. There’s more to pricing than meets the eye.

The key is to find parallels between what others are doing and what aligns with what you’re doing. If everyone else is selling subscriptions, then maybe that’s clear enough for you to do the same. Or perhaps it’s more of an opportunity to differentiate your product, offering a pricing model that might appeal to an overlooked segment of the market.

The purpose of finding parallels is to prevent sticker shock, that is, to avoid setting a price that is a far outlier from what the rest of the market has already established.

iA says it extremely well in a blog post that’s incredibly transparent with their findings:

“As you can see, the pricing ranges from $5 to $25 per user. There are outliers on the upper scale. Some of them offer a free model for individuals or low-usage cases. As you already know, they can do that because they have venture capital or run on an ad-based model (Google). Google and PowerPoint come as part of a suite.”
—iA, “Presenter Pricing (I)”

Ah! There’s always a story lurking in the details. Outliers can exist, and they might actually be on the low end of the spectrum. Competing on price alone always feels like a risky call; just ask any company that’s had to play along with Walmart’s aggressive tactics to be a low-price leader.

Identifying Opportunities

Perhaps the most important lesson from my own pricing research is that finding parallels in the market will also provide a clearer picture of what value your product provides. Does your product do something that the others don’t? Is it so much easier to use than the rest that the user experience is where the value comes from?

Add those things to the spreadsheet! The spreadsheet becomes more of a matrix than a competitor list. You can use it to surface what’s unique about your product and lean into it when determining the overall value your product offers compared to everyone else.

Again, the iA team throws a bit of a curveball based on its recent experience:

“Whether a price is low, high, or right depends on what [customers] compare it to. Customers will compare apples and oranges.”
—iA, “Presenter Pricing (I)”

Did you catch that last point? You may need to find pricing parallels with products that are tangentially related to your market because you can’t control what you might be compared to. My own pricing journey was on a hosted calendar, and while it has way less in common with something like Google Calendar, customers would inevitably compare our offering to it because Google Calendar is such a common point of reference when talking about anything related to online calendars.

Starting The Conversation

The topic of pricing usually comes up during product development but could certainly come much sooner. The closer the finish line for development gets, the more the reality sets in that there’s work to do to get the product to market, and pricing is one step that simply cannot be skipped — how else will customers compensate you for the pleasure of getting their hands on the product?

You could start spewing numbers until one resonates with you, but that’s rather subjective. Will your customers see the same value in the product that you do? It’s worth checking, and sometimes it works to directly ask your customers — whether it’s existing customers or a target audience you’ve identified.

That’s what iA did when they published the question “How Much Would You Charge for iA Presenter?” in the aforementioned blog post from November 2022. The post provides oodles of context for readers to get an idea of what the iA team was already considering and what they’ve learned from an initial round of research on different pricing models.

What I like about this approach is the transparency, sure, but also how it leads to two other things:

  • Setting expectations
    iA had already introduced iA Presenter in another post that precedes the call for pricing opinions. But in bringing pricing to the forefront, the team is giving existing and potential customers a heads-up of what’s to come. So, even if they settled on a high price point that is an outlier in the market, at least everyone is already familiar with the thinking behind it.
  • Data
    Posing the question means they had opened the door for customers to weigh in. That’s the sort of feedback that can be designed as a survey, with the data helping inform pricing experiments and identify insightful patterns.

Parsing Information

Have you ever had to design a survey? Good gosh, that can be a frustrating experience. The challenge is to get useful feedback that leads to insights that allow you to make better decisions. But the process is all too easy to mess up, from choosing the wrong type of form input for a particular question or, worse, injecting your own biases into how things are worded. Surveys can be as much a balancing act as product pricing!

That’s why I find iA’s approach so interesting. They had the idea to ship not one version of the survey but three. This is what they shared with us:

“We divided our newsletter’s subscribers into different groups of roughly 5000 people each and sent them different versions of the form. The first group received the Version 0 of the form, and each time we updated this one, we sent it to a different group.

In retrospect, it’s clear why, but we didn’t expect the form design to affect the price suggestions so much. A lot has been written about A/B testing, form design, and questionnaire design. But here we were right in the middle of a form/questionnaire experiment and saw how directly the design affected the results. It was amazing to see all of this happening in real-time.”

It was a genius move, even if it wasn’t obvious at first. Sending three versions to different segments of the audience does a few things:

  • It considers different scenarios.
    Rather than asking its audience what pricing model they prefer, iA assumed a pricing model and put it in front of users. This way, they get a reaction to the various pricing scenarios they are considering and gain a response that is just as useful as directly asking.
  • It challenges assumptions.
    The iA team put a lot of legwork into researching pricing models and evaluating their pros and cons. That certainly helped the team form some opinions about which strategies might be the most effective to implement. But even all the research digging in the world doesn’t guarantee a particular outcome. Evaluating responses from a clearly defined target audience using three versions of the form allowed iA to put its assumptions to the test. Is a subscription-based model really the best way to go? Now they know!
  • It reveals customer biases.
    Anything you ask will have a degree of bias in it, so why not embrace that fact and let the customers show you their biases in the process? One version of the iA Presenter survey was based on a subscription pricing model, and the team found that some users hate subscriptions so much that they refused to fill out this form and were quite vocal about it.

I love the way iA sums up the patterns they found in the survey results and how those results were influenced by differentiating the surveys:

“We offered a form that required you to fill out monthly and yearly subscriptions plus ownership. […] We offered a second version that didn’t require you to fill out all fields. What happened there raised brows. The price suggestions changed. They got lower. We continued changing the form, and every time, the result changed.”

And with that, iA had unlocked what they needed to determine a price for iA Presenter. From a follow-up blog post that reports their findings:

“All data combined, you decided that iA Presenter should charge the industry standard of 5.- for a single license. Multiplying 5.- times twelve for a year and times three to make it worthwhile would make iA Presenter a 150.- app.”
—iA, “Presenter Pricing (II)”

Aligning Data With Strategy

Great! iA was able to determine a specific price point with some level of scientific certainty. It would be easy enough to slap that on a price tag and start selling, but that doesn’t do justice to the full picture the data provides. Specifically, iA learned that the price point they determined would not align with all of the audience segments they surveyed.

Here’s more of what they were willing to share with us about their audience’s feelings on pricing:

  • The collective audience suggested charging the industry standard of $5 for a single license.
  • Some think that the $50 price for the existing iA Writer app is high. $100 is not that much in Switzerland, but in some countries, $100 can be a big chunk of a monthly salary. That means local pricing adjustments ought to be considered.
  • Suggestions for business subscriptions varied between $10 and $20 per month per license.
  • Students want a free tier of access.

iA is lucky enough to have an internal source of useful data, thanks to the long sales history it has with iA Writer. They found that new customers tend to prefer a subscription model, while existing (or “convinced”) customers show a preference for a single purchase.

So, it’s more like they were looking at different pricing tiers instead of a flat rate. Their audience is all over the map as far as pricing expectations go, and a pricing model that offers choices based on the type of customer (e.g., business vs. student) and where people are geographically is likely to cast a wider net and attract more customers than a single price point would. So, even if verified students are able to get the product for free, that should be offset by the price points for single-license customers and businesses.

Wrapping Up

What we’ve looked at are several important considerations that go into product pricing. The work it takes to determine a price goes way past subjective guesses. Pricing is one of the “Four Ps of Marketing” that influence a product’s market position and how customers perceive it.

Setting a price is a statement of the product’s quality and the value it adds to the market.

That’s the sort of thing you can’t leave to chance.

That said, it’s clear that determining a product price is far from an exact science. The challenge is to elicit the right information that leads to insights that are more reflective of and aligned with the expectations of the target audience. Will they pay the price you want?

There are many other considerations that go into pricing, to be sure. You might discover that the price the market is willing to pay is unsustainable and does not cover enough of the costs that went into product development or the ongoing costs of maintenance, developing new features, marketing, support, salaries, and so on. You don’t want to enter yourself in a race to the bottom, after all.

iA Presenter makes for a great case study on product pricing. The fact that it’s the type of software that those of us in the web design and development community often work on makes it an extremely relevant example. Plus, iA put so much effort into research and was generous enough to share it with us that it provides a nice recent snapshot of a real-world situation.

And, hey, now that you know everything that went into setting prices for iA Presenter, you should check it out. Do you think they made the right choice? Will the multi-tier pricing strategy work next to market competitors who are more mature and are able to practically give away their stuff for free, like Google Slides? We’ll find out soon as iA Presenter is officially out of beta and has been released to the public on June 1st. You can follow along with their ongoing journey of shipping a new product on their blog or by signing up for their newsletter.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Advanced Form Control Styling With Selectmenu And Anchoring API]]> https://smashingmagazine.com/2023/06/advanced-form-control-styling-selectmenu-anchoring-api/ https://smashingmagazine.com/2023/06/advanced-form-control-styling-selectmenu-anchoring-api/ Thu, 01 Jun 2023 10:00:00 GMT No doubt you’ve had to style a <select> menu before. And when you do, you’ve often had to reach far down in your CSS arsenal of tricks or rely on JavaScript to get anything near the level of customization you want. It’s a long-running headache in the front-end world.

Well, thanks to the efforts of the Open UI community, we have a new <selectmenu> element to look forward to, and its purpose is to provide CSS styling affordances to selection menus in ways we’ve never had before.

We’re going to demonstrate an initial implementation of <selectmenu> in this article. But we’ll throw in a couple of twists while we’re at it. What we’re making is a radial select menu, something we could never have done with CSS alone. And since we're working with experimental tech, we’re going to toss in more experimental features along the way, including images, the HTML Popover API, and the CSS Anchor Positioning API. The result is going to wind up like this:

  • <selectmenu>: This is the selector itself. It holds the button and listbox of menu options.
  • button: This part toggles the visibility of the listbox between open and closed.
  • selected-value: This displays the value of the menu option that is currently selected. So, if you have a listbox with three options and the second option is selected, the second option is what matches the part.
  • marker: Dropdown menus usually have some sort of downward-facing arrow icon to indicate that the menu can be expanded. This is that part of the menu.
  • listbox: This is the wrapper that contains the options and any <optgroup> elements that group certain options together inside the listbox.
  • <optgroup>: We already let the cat out of the bag on this one, but this part groups options together. It includes a label for the group.
  • <option>: A value that the user is able to select in the menu. There can be just one, but it’s much more common to see a <select> — and, by extension, a <selectmenu> — with multiple options.

The other way is to slot the content ourselves in HTML. This can be a nice approach since it allows us to customize the markup any way we like. In other words, we can replace any of the parts we want, and the browser will use our markup instead of the implicit structure. In fact, this is the approach we’ll use in the radial menu we’re making.

The way to replace parts in the HTML is to use slots. The markup we put in a slot lives in a separate tree in the Shadow DOM, replacing the element’s default contents with what we specify.

Here’s an abbreviated example in HTML. Notice how the <button> and listbox are both contained in slots that represent the HTML we want to use for those parts.

<selectmenu class="my-custom-select">
  <div slot="button">
    <span behavior="selected-value" slot="selected-value"></span>
    <button behavior="button"></button>
  </div>
  <div slot="listbox">
    <div popover="auto" behavior="listbox">
       <option value="one">one</option>
       <option value="two">two</option>
    </div>
  </div>
</selectmenu>

By using slots and behavior as attributes, we can tell the browser how it should behave and how it should interact with keyboard navigation. If managed carefully, this will also mean that we get good accessibility out of the box because the browser will know how to behave based on what we define.

Ready? OK, let’s start by setting up our markup for our radial <selectmenu>.

The Radial Selectmenu Markup

We will start by creating our own markup for this basic example. We will use pretty much the same approach as used in the explainer of the Selectmenu element because I think it demonstrates the vast flexibility we have to style this element using similar markup.

<selectmenu class="selectmenu">
  <button class="selected-button" slot="button" behavior="button">
    <span behavior="selected-value" class="selected-value"></span>
  </button>
  <div slot="listbox">
    <div popover behavior="listbox">
      <option value="one">one</option>
      <option value="two">two</option>
      <option value="three">three</option>
      <option value="four">four</option>
      <option value="five">five</option>
      <option value="six">six</option>
    </div>
  </div>
</selectmenu>

You might notice from the markup that we’ve added the selected-value behavior in the button. This is perfectly fine, as our button will always show the selected value by doing this.

And, just like the example in the explainer, we are using the Popover API inside of our listbox slot. When we look at what we have in Chrome Canary, we see that it already works fine. Take note that even keyboard navigation already seems to be handled for us!

We can place the options along a circle when the popover is open by adding a transform to each option:

[popover]:popover-open option {
  /* Half the size of the circle */
  --half-circle: calc(var(--circle-size) / -2);

  /* Straighten things up and space them out */
  transform:
      rotate(var(--deg))
      translate(var(--half-circle))
      rotate(var(--negative-deg));
}

Now, when the popover-open state is triggered, we rotate each option by a certain number of degrees, translate it by half the circle size, and then rotate it back by the same number of negative degrees. The order of the transforms is important!

I said we would rotate the options “by a certain number of degrees” because we have to do it for each individual option. This is totally possible in vanilla CSS (and that’s how we’re going to do it), but it could also be done with a Sass loop or even with JavaScript if we needed it.
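
As an aside, here is what that JavaScript variant could look like (an illustrative sketch, not code from the article): a small script that computes each option’s angle instead of hardcoding the nth-child rules shown below.

// Each option n out of the total gets an angle of (360 / total) * n degrees.
const options = document.querySelectorAll("[popover] option");
const step = 360 / options.length;
options.forEach((option, index) => {
  option.style.setProperty("--deg", `${step * (index + 1)}deg`);
});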

Let’s add this to our popover style rules:

[popover] {
  --rotation-divide: calc(180deg / 2);

  /* etc. */
}

This will be our default rotation, and it’s a special case for when we only have one option. We’ll use 360deg for the rest in a moment.

For now, we can select the first option and set its --deg custom property from the --rotation-divide variable:

option:nth-child(1) {
  --deg: var(--rotation-divide);
}

Great! Why you would use a select when there is only one option, I don’t know, but nevertheless, it’s handled gracefully:

Styling the other options takes a bit of work because we have to:

  • Divide the circle by the number of available options and
  • Multiply that result for each option.

I’m so glad we have the calc() function in CSS to help us do this. Otherwise, it would be some pretty heavy lifting.

[popover]:has(option:nth-child(2)) {
  --rotation-divide: calc(360deg / 2);
}

[popover]:has(option:nth-child(3)) {
  --rotation-divide: calc(360deg / 3);
}

[popover]:has(option:nth-child(4)) {
  --rotation-divide: calc(360deg / 4);
}

[popover]:has(option:nth-child(5)) {
  --rotation-divide: calc(360deg / 5);
}

[popover]:has(option:nth-child(6)) {
  --rotation-divide: calc(360deg / 6);
}

option:nth-child(1) {
  --deg: var(--rotation-divide);
}

option:nth-child(2) {
  --deg: calc(var(--rotation-divide) * 2);
}

option:nth-child(3) {
  --deg: calc(var(--rotation-divide) * 3);
}

option:nth-child(4) {
  --deg: calc(var(--rotation-divide) * 4);
}

option:nth-child(5) {
  --deg: calc(var(--rotation-divide) * 5);
}

option:nth-child(6) {
  --deg: calc(var(--rotation-divide) * 6);
}

/* that’s enough options for you! */
option:nth-child(1n + 7) {
  display: none;
}

Here’s a live demo of what this produces. Remember, Chrome Canary is the only browser that currently supports this, as long as the experimental features flag is enabled.

See the Pen Radial selectmenu with Anchoring API - Open UI [forked] by @utilitybend.

Do We Need All Those :has() Pseudo-Classes?

Yeah, I think so, as long as we’re using plain CSS. And that’s been my goal all along. That said, JavaScript could be useful here.

For example, we could add an ID to the element with the popover attribute and count the children it contains:

// Count the options and expose the total to CSS as a custom property.
const listbox = document.getElementById('popoverlistbox');
listbox.style.setProperty('--children', listbox.childElementCount);

That way, we can replace all the :has() instances with more concise styles:

option {
  --rotation-divide: calc(360deg / var(--children));
  --negative-deg: calc(var(--deg) / -1);
}

For this demo, however, you might still want to limit the --children custom property to a maximum of 6. I’ve found that’s the sweet spot before the circle gets too crowded and needs additional tweaks.
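
If you take the JavaScript route, one way to enforce that cap is to clamp the value before setting it. This is a hypothetical tweak, not part of the original demo:

// Never report more than 6 children so the circle stays readable.
listbox.style.setProperty('--children', Math.min(listbox.childElementCount, 6));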

See the Pen Radial selectmenu Open UI with JS children count [forked] by @utilitybend.

Let’s Animate This Thing

There are a few more CSS features coming up that will make animating popovers a lot easier. But they’re not ready for us yet, even for this example.

We can get around that with a little trick. But please keep in mind that what we’re about to do will not be the best practice when we get the new animating features. I wanted to give you the information anyway because I think it’s a nice enhancement for what we’re making.

First, let’s add the following to our popover selector:

[popover] {
  display: block;
  position: absolute;
  /* etc. */
}

This makes it so our popover is always rendered as a block and is ready to go wherever it is placed, and it establishes a stacking context.

We will lose the benefit of the popover’s top layer and will have to play around with z-index values to get the effect we want. Juggling z-index values — especially with a large number of items — is never fun. It gets messy fast. That’s one of the problems popovers were designed to solve for us.

But let’s go ahead and give our button a z-index:

.selected-button {
  z-index: 2;
  /* etc. */
}

Now we can use animations to reveal the options by using the :not() pseudo-class. This is how we can reset the transform when the popover is in its closed state:

[popover]:not(:popover-open) {
  z-index: -1;
}

[popover]:not(:popover-open) option {
  transform: rotate(var(--deg)) translate(0) rotate(var(--negative-deg));
}

And there you have it! An animated radial <selectmenu>:

See the Pen Radial selectmenu with Anchoring API and animation [forked] by @utilitybend.

Let’s Add Some Images While We’re At It

There was quite a bit of discussion about this in the Open UI community, but the selected value does not accept innerHTML as an option because, for one, it could result in duplicated IDs. But I sure do love a good old role-playing game, and I decided to use the <selectmenu> as a potion selector.

This is completely based on everything we just covered, only adding images to demonstrate that it is possible:

See the Pen Open-UI - Select a potion (Chrome Canary) [forked] by @utilitybend.

With a sprinkle of JavaScript (for this totally optional enhancement), we can read the innerHTML of the selected option and pass it to our .selected-value element:

const selectMenus = document.querySelectorAll("selectmenu");
selectMenus.forEach((menu) => {
  const selectedvalue = menu.querySelector(".selected-value");
  // Show the markup of the currently selected option in the button…
  selectedvalue.innerHTML = menu.selectedOption.innerHTML;
  // …and keep it in sync whenever the selection changes.
  menu.addEventListener("change", () => {
    selectedvalue.innerHTML = menu.selectedOption.innerHTML;
  });
});

Conclusion

I don’t know about you, but all of this gets me super excited for the future. Everything we looked at, from the Selectmenu element to the CSS Anchor Positioning API, is still a work in progress. Still, we can already see the great number of possibilities they will open up for us as designers and developers.

The fact that all of this is coming by way of built-in browser features is what’s most exciting because it gives us a standard way to approach things like customized <select> menus, popovers, and anchoring to the extent that it could eliminate the need for frameworks or libraries that we use today for the same things. We win because we get more control, and users win because they get lighter page loads.

If you’d like to do a bit of research on Selectmenu or even get involved with the Open UI community, you’re more than welcome, as we need more developers to create demos and share their struggles to help make these features better if — and when — they ship.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Create Your Own Path In June (2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/05/desktop-wallpaper-calendars-june-2023/ https://smashingmagazine.com/2023/05/desktop-wallpaper-calendars-june-2023/ Wed, 31 May 2023 08:00:00 GMT There’s an artist in everyone. Some bring their ideas to life with digital tools, others capture the perfect moment with a camera or love to grab pen and paper to create little doodles or pieces of lettering. And even if you think you’re far from being an artist, well, it might just be hidden deep inside of you. So why not explore it?

For more than twelve years already, our monthly wallpapers series has been the perfect opportunity to do just that: to break out of your daily routine and get fully immersed in a creative little project. This month was no exception, of course.

In this collection, you’ll find beautiful, unique, and inspiring wallpapers designed by creative folks who took on the challenge this month. All of them are available in versions with and without a calendar for June 2023 and can be downloaded for free. As a little bonus goodie, we also compiled a selection of timeless June wallpapers from our archives at the end of this post. Maybe you’ll spot one of your almost-forgotten favorites in there, too? A big thank-you to everyone who shared their designs with us this month! Happy June!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

World Environment Day

“An annual event celebrated on June 5th to raise awareness and promote action for the protection of the environment. It serves as a global platform for individuals, communities, and governments to come together and address pressing environmental issues. So I decided to design this wallpaper and to promote awareness among us. Hope you like it.” — Designed by Hrishikesh Shome from India.

Back In My Days

Designed by Ricardo Gimenes from Sweden.

Boundless Joy

“Boundless Joy is a magical realm where children and dogs find pure delight. It’s a place where laughter echoes through sunlit meadows and imaginations take flight. In this enchanting world, youthful spirits soar as kids and their furry companions chase dreams, playfully bound together. With every step, Boundless Joy sparks smiles, ignites friendships, and creates memories that last a lifetime.” — Designed by Kasturi Palmal from India.

Cuban Bartender

“Summer arrives and with it the long days and nights that allow us to enjoy the weather. We are heading to Cuba and from the Malecón we observe the city waiting for the new day.” — Designed by Veronica Valenzuela from Spain.

Blue Butterfly

“Captured with Sony A7II and FE 90mm F2.8 Macro lens. Macro photography is my favorite.” — Designed by Viktor Hanacek from Czechia.

Holding Out For Me

“Effectively captures the essence of a girl observing the view outside through a window. It conveys the image of someone attentively observing or gazing at what’s happening outside, suggesting a sense of curiosity or contemplation.” — Designed by Bhabna Basak from India.

Pre-Wash Instructions

Designed by Ricardo Gimenes from Sweden.

Summer Palms

“Looks like Bahamas, but these are from San Francisco! Yep, photographers’ secrets!” — Designed by Viktor Hanacek from Czechia.

Raise A Glass To World Milk Day

“World Milk Day is a reminder to appreciate the nourishing qualities of milk and the impact it has on our well-being. Whether enjoyed on its own, added to a smoothie, or used to create mouthwatering recipes, milk is a versatile and wholesome ingredient that deserves to be celebrated.” — Designed by PopArt Studio from Serbia.

Oldies But Goodies

So many wonderful wallpaper designs have seen the light of day since we first embarked on this monthly journey. Below you’ll find a selection of favorites from past June editions. Please note that these wallpapers don’t come with a calendar.

Create Your Own Path

“Nice weather has arrived! Clean the dust off your bike and explore your hometown from a different angle! Invite a friend or loved one and share the joy of cycling. Whether you decide to go for a city ride or a ride in nature, the time spent on a bicycle will make you feel free and happy. So don’t wait, take your bike and call your loved one because happiness is greater only when it is shared. Happy World Bike Day!” — Designed by PopArt Studio from Serbia.

Summer Coziness

“I’ve waited for this summer more than I waited for any other summer since I was a kid. I dream of watermelon, strawberries, and lots of colors.” — Designed by Kate Jameson from the United States.

Old Kyiv

“This picture is dedicated to Kiev (Kyiv), the capital of Ukraine. It is loosely based on a 13th century map — this is what the center of Kyiv looked like ca. 900 years ago! The original map also included the city wall — however, I decided not to wrap the buildings into the wall, since in my dream world, a city would not need walls.” — Designed by Vlad Gerasimov from Georgia.

Travel Time

“June is our favorite time of the year because the keenly anticipated sunny weather inspires us to travel. Stuck at the airport, waiting for our flight but still excited about wayfaring, we often start dreaming about the new places we are going to visit. Where will you travel to this summer? Wherever you go, we wish you a pleasant journey!” — Designed by PopArt Studio from Serbia.

Strawberry Fields

Designed by Nathalie Ouederni from France.

Oh, The Places You Will Go!

“In celebration of high school and college graduates ready to make their way in the world!” — Designed by Bri Loesch from the United States.

Expand Your Horizons

“It’s summer! Go out, explore, expand your horizons!” — Designed by Dorvan Davoudi from Canada.

Summer Surf

“Summer vibes…” — Designed by Antun Hirsman from Croatia.

Summertime

Designed by Ricardo Gimenes from Sweden.

Deep Dive

“Summer rains, sunny days, and a whole month to enjoy. Dive deep inside your passions and let them guide you.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Join The Wave

“The month of warmth and nice weather is finally here. We found inspiration in the World Oceans Day which occurs on June 8th and celebrates the wave of change worldwide. Join the wave and dive in!” — Designed by PopArt Studio from Serbia.

Melting Away

Designed by Ricardo Gimenes from Sweden.

Bauhaus

“I created a screenprint of one of the most famous buildings from the Bauhaus architect Mies van der Rohe for you. So, enjoy the Barcelona Pavillon for your June wallpaper.” — Designed by Anne Korfmacher from Germany.

World Environment Day

“On June 5th, we celebrate World Environment Day — a moment to pause and reflect on how we impact Earth’s health. A few activities represented in this visual include conserving energy and water, shopping and growing local, planting flowers and trees, and building a sustainable infrastructure.” — Designed by Mad Fish Digital from Portland, OR.

Pineapple Summer Pop

“I love creating fun and feminine illustrations and designs. I was inspired by juicy tropical pineapples to celebrate the start of summer.” — Designed by Brooke Glaser from Honolulu, Hawaii.

Window Of Opportunity

“‘Look deep into nature and then you will understand everything better,’ A.E.” — Designed by Antun Hiršman from Croatia.

Midsummer Night’s Dream

“The summer solstice in the northern hemisphere is nigh. Every June 21 we celebrate the longest day of the year and, very often, end up dancing like pagans. Being landlocked, we here in Serbia can only dream about tidal waves and having fun at the beach. What will your Midsummer Night’s Dream be?” — Designed by PopArt Studio from Serbia.

Papa Merman

“Dream away for a little while to a land where June never ends. Imagine the ocean, feel the joy of a happy and carefree life with a scent of shrimps and a sound of waves all year round. Welcome to the world of Papa Merman!” — Designed by GraphicMama from Bulgaria.

Gravity

Designed by Elise Vanoorbeek (Doud Design) from Belgium.

Solstice Sunset

“June 21 marks the longest day of the year for the Northern Hemisphere — and sunsets like these will be getting earlier and earlier after that!” — Designed by James Mitchell from the United Kingdom.

Yoga Is A Light, Which Once Lit, Will Never Dim

“You cannot always control what goes on outside… you can always control what goes on inside… Breathe free, live and let your body feel the vibrations and positiveness that you possess inside you. Yoga can rejuvenate and refresh you and ensure that you are on the journey from self to the self. Happy International Yoga Day!” — Designed by Acodez IT Solutions from India.

Summer Things

“Summer is coming so I made this simple pattern with all my favorite summer things.” — Designed by Maria Keller from Mexico.

Night Night!

“The time we spend with our dads is precious so I picked an activity my dad enjoys a lot, reading.” — Designed by Maria Keller from Mexico.

Evolution

“We’ve all grown to know the month of June through different life stages. From toddlers to adults with children, we’ve enjoyed the weather with rides on our bikes. As we evolve, so do our wheels!” — Designed by Jason Keist from the United States.

Handmade Pony Gone Wild

“This piece was inspired by the My Little Pony cartoon series. Because those ponies irritated me so much as a kid, I always wanted to create a bad ass pony.” — Designed by Zaheed Manuel from South Africa.

Getting Better Everyday

“Inspired by the eternal forward motion to get better and excel.” — Designed by Zachary Johnson-Medland from the United States.

Comfort Reading

Designed by Bobby Voicu from Portugal.

Happy Squatch

“I just wanted to capture the atmosphere of late spring/early summer in a fun, quirky way that may be reflective of an adventurous person during this time of year.” — Designed by Nick Arcarese from the United States.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Designing A Better Design Handoff File In Figma]]> https://smashingmagazine.com/2023/05/designing-better-design-handoff-file-figma/ https://smashingmagazine.com/2023/05/designing-better-design-handoff-file-figma/ Fri, 26 May 2023 15:30:00 GMT Creating an effective handoff process from design to development is a critical step in any product development cycle. However, as any designer knows, it can be a nerve-wracking experience to send your carefully crafted design off to the dev team. It’s like waiting for a cake to bake — you can’t help but wonder how it will evolve in the oven and how it will taste when you take it out.

The relationship between designers and developers has always been a little rocky. Despite tools like Figma’s Inspect feature (which allows developers to inspect designs and potentially convert them to code in a more streamlined way), there are still many barriers between the two roles. Often, design details are hidden within even more detailed parts, making it difficult for developers to accurately interpret the designer’s intentions.

For instance, when designing an image, a designer might import an image, adjust its style, and call it done. More sophisticated designers might also wrap the image in a frame or auto layout so it better matches how developers will later convert it to code. But even then, many details could still be missing. The main problem here is that designers typically create their designs within a finite workspace (a frame with a specific width). In reality, however, the design elements will need to adapt to a variety of different environments, such as varying device sizes, window widths, screen resolutions, and other factors that can influence how the design is displayed. Therefore, developers will always come back with the following questions:

  • What should be the minimum/maximum width/height of the image?
  • What is its content style?
  • What effects need to be added?

In reality, these are the details that need to be addressed.

Designers, let’s face the truth: there’s no perfect handoff.

Every developer works, thinks, and writes code differently, which means there is no such thing as the ideal handoff document. Instead, our focus should be on creating a non-perfect but still effective and usable handoff process.

In this article, we will explore how to create a design handoff document that attempts to strike the right balance between providing developers with the information they need while still allowing them the flexibility to bring the design to life in their own way.

How Can The Handoff Files Be Improved?

1. Talk To Developers More Often

Design is often marked as complete once the design handoff file is created and the developers start transforming it into code. However, in reality, the design is only complete when the user finds the experience pleasant.

Therefore, crafting the design handoff file and having the developer help bring your design to the user is essentially another case study on top of the one you have already worked on. To make it perfect, just as you would talk to users, you also need to communicate with engineers — to better understand their needs, how they read your file, and perhaps even teach them a few key things about using Figma (if Figma is your primary design tool).

Here are a few tips you can teach your developers to make their lives easier when working with Figma:

Show Developers The Superpower Of The Inspect Panel

Figma’s Inspect feature allows developers to see the precise design style that you’ve used, which can greatly simplify the development process. Additionally, if you have a design library in place, Inspect will display the name of each component and style that you’ve used. This can be incredibly helpful for developers, especially if they’re working with a style guide, as they can use the component or style directly to match your design with ease.

In addition to encouraging developers to take advantage of the Inspect panel, it’s sometimes helpful to review your own design in a read-only view. This allows you to see precisely what the developers will see in the Inspect panel and ensures that components are named accurately, colors are properly linked to the design system, and other vital details are structured correctly.

Share With Developers The Way To Export Images/Icons

Handling image assets, including icons, illustrations, and images, is also an essential part of the handoff process, as the wrong format might result in a poor presentation in the production environment.

Be sure to align with your developers on how they’d like to handle icons and images: either you export all of the images and icons in a single ZIP file and share it with them, or they export the assets on their own. If it’s the latter, it’s important to explain in detail the correct way to export the images and icons so that they can handle the export process themselves!

Encourage Them To Use Figma’s Commenting Feature

It’s common for developers to have questions about the design during the handoff process. To make it easier for everyone involved, consider teaching them to leave comments directly in Figma instead of sending you a message. This way, the comments are visible to everyone and provide context for the issue at hand. Additional features, such as comment reactions and the “mark as resolved” button, further enable better interaction between team members and help everyone keep track of whether an issue has been addressed or not.

Leverage Cursor Chat

If you and the developers are both working within the same Figma file, you can also make use of the cursor chat feature to clarify any questions or issues that arise. This can be a fun and useful way to collaborate and ensure that everyone is on the same page.

Use Figma Audio Chat

If you need to discuss a complex issue in more detail, consider using Figma’s audio chat feature. This can be a quick and efficient way to clarify any questions or concerns arising during the development process.

It’s important to keep in mind that effective collaboration relies on good communication. Therefore, it’s crucial to talk to your developers regularly and understand their approach to reading and interpreting your designs, especially when you first start working with them. This sets the foundation for a productive and successful partnership.

2. Documenting Design Decisions For You And Developers

Let’s be honest: one reason building our design portfolios often takes so much time is that we don’t document every design decision along the way, so we end up piecing the case studies together later, trying our best to track down the design files and everything else we need.

I find it useful to document my decisions in Figma, not just the designs but, if appropriate, also competitor analysis, problem statements, and user journeys, and to leave links to these pages within the handoff file as well. The developer might not read it, but I often hear from the developers in my team that they like it, as they can dig into what the designers were thinking while working on the design, and they can learn tips for building a product from us as well.

3. Don’t Just Leave The Design There. Add The Details

When it comes to design, details matter — just leaving the design “as is” won’t cut it. Adding details not only helps developers better understand the design, but it can also make their lives easier. Here are some tips for adding those crucial design details to your handoff.

Number The Frame/Flow If Possible

I really like the Figma handoff template that Luis Ouriach (Medium, Twitter) created. The numbering and title pattern makes it easy for developers to understand which screen belongs to which flow immediately. However, it can be complicated to update the design later as the numbering and title need to be manually updated.

Note: While there are plugins available (like, for example, Renamed), which can help with renaming multiple frames and layers all at once, this workflow can still be inconvenient when dealing with more complicated naming patterns. For instance, updating “1. Welcome → 2. Onboarding → 3. Homepage” into “1. Welcome → 2. Onboarding → 3. Sign up → 4. Homepage” can become quite a hassle. Therefore, one alternative approach is to break down the screens into different tickets or user journeys and assign a number that matches each ticket/user journey.

Name The Layers If Possible

We talked about numbering/naming the frames, but naming the layers is equally important! Imagine trying to navigate a Figma file cluttered with layers and labels like “Frame 3123,” “Rectangle 8,” and “Circle 35.” This can be confusing and time-consuming for both designers and developers, as they need to sift through numerous unnamed layers to identify the correct one.

Well-named layers facilitate better collaboration, as team members can quickly locate and comprehend the purpose of each element. This also helps ensure consistency and accuracy when translating designs into code.

If you search around in Figma, you will find a number of plugins that can help you with naming the layers in a more systematic way.

Add The Details For Interaction: Make Use Of Figma’s Section Feature

This might seem trivial, but I consider it important. Design details shouldn’t be something like “This design does X, and if you press that, it will do Y.” Instead, it’s crucial to include details like the hover state, initial state, max width/height, and the outcome of different use cases.

For this reason, I appreciate the new section feature that Figma has released. It allows me to have a big design at the top so that developers can see all of the design at once and then look at the section details for all the design and interaction details.

Make Use Of The Interactive Prototype And FigJam Features To Show The User Flow

Additionally, try to share with the developers how the design screens connect to one another. You can use the interactive prototype feature within Figma to connect the screens and make them move so that developers can understand the logic. Alternatively, you can use FigJam to connect the screens, allowing developers to see how everything is connected at a glance.

4. The Secret Weapon Is Adding Loom Video

Loom video is a lifesaver for us. You only need to record it once, and then you can share it with anyone interested in the details of your design. Therefore, I highly recommend making use of Loom! For every design handoff file, I always record a video to walk through the design. For more complicated designs, I will record a separate video specifically describing the details so that I don’t need to waste other people’s time if they’re not interested.

To attach the Loom video, I use the Loom plugin and place the video right beside the handoff file. Developers can play it as many times as needed without disturbing you or having to ask follow-up questions.

→ Download the Loom Embed Figma plugin

5. The Biggest Fear: Version Control

In an ideal world, the design would be completely finalized before developers start coding. But in reality, design is always subject to adjustments, even after development has already begun. That’s why version control is such an important topic.

Although Figma has a branching feature for enterprise customers to create new designs in a separate branch, I find it helpful to keep a few extra things in your design file.

Have A Single Source Of Truth

Always ensure that the developer handoff file you share with your team is the single source of truth for the latest design. If you make any changes, update the file directly, and keep the original as a duplicate for reference. This will prevent confusion and avoid pointing developers to different pages in Figma.

If you have access to the branching feature in Figma, it can be highly beneficial to utilize it to further streamline your workflow. When I need to update a handoff file that I have already shared with the developers, my typical process is to create a new branch in Figma first. Then I update the developer handoff file in that branch, send it to the relevant stakeholders for review, and finally merge it back into the original developer handoff file once everything is confirmed. This ensures that the link to the developer handoff file remains unchanged for the developers.

Changelogs/Future Plan

Include a changelog in the handoff file to help developers understand the latest changes made to the design.

Similarly to changelogs, if you already know of future plans to adjust the design, write them down somewhere in Figma so that the developers can understand what changes are to be expected.

6. Make Use Of Plugins

There are also a number of plugins to help you with creating your handoff:

  • EightShapes Specs
    EightShapes Specs creates specs for your design automatically with just one click.
    → Download the EightShapes Spec Figma plugin
  • Autoflow
    Autoflow allows you to connect the screens visually without using FigJam.
    → Download the Autoflow Figma plugin
  • Style Organizer
    Style Organizer allows you to make sure all of your styles are linked to your component/style so that developers won’t need to read hex code in any case.
    → Download the Style Organizer Figma plugin

7. The Ultimate Goal Is To Have A Design System

If you want to take things a step or two further, consider pushing your team to adopt a design system. This will enable the designs created in Figma to be more closely aligned with what developers expect in the code. You can match token names and name your layers/frames to align with how developers name their containers and match them in your design system.

Here are some of the benefits of using a design system:

  • Consistency
    A design system ensures a unified visual language across different platforms, resulting in a more consistent user experience.
  • Efficiency
    With a design system in place, designers and developers can reuse components and patterns, reducing the time spent on creating and updating individual elements.
  • Collaboration
    A design system facilitates better communication between designers and developers by establishing a shared language and understanding of components and their usage.

Note: If you would like to dig deeper into the topic of design systems, I recommend reading some of the Smashing Magazine articles on this topic.

Conclusion: Keep Improving The Non-perfect

Ultimately, as I mentioned at the beginning, there’s no one-size-fits-all approach to developer handoff, as it depends on various factors such as product design and the engineers we work with. However, what we can do is work closely with our engineers, communicate with them regularly, and collaborate to find solutions that make everyone’s lives easier. Just like our designs, the key to successful developer handoff is prioritizing good communication and collaboration.

Further Reading

  • “Design Handoffs,” Interaction Design Foundation
    Design handoff is the process of handing over a finished design for implementation. It involves transferring a designer’s intent, knowledge, and specifications for a design and can include visual elements, user flows, interaction, animation, copy, responsive breakpoints, accessibility, and data validations.
  • “A Comprehensive Guide to Executing The Perfect Design-to-Development Handoff,” Phase Mag
  • “Design Handoff 101: How to handoff designs to developers,” Zeplin Blog
    Before we had tools like Figma, design handoff was a file-sharing nightmare for designers. When UI designs were ready for developers to start building, nothing could begin until designers manually added redlines to their latest local design file, saved it as a locked Sketch or Photoshop file or a PDF, and made sure developers were working on the correct file after every update. But those design tools completely changed the way teams collaborate around UI design — including the way design handoff happens.
  • “How to communicate design to developers (checklist),” Nick Babich
  • “A Front-End Developer’s Ode To Specifications,” Dmitriy Fabrikant, Smashing Magazine
    In the physical world, no one builds anything without detailed blueprints because people’s lives are on the line. In the digital world, the stakes just aren’t as high. It’s called “software” for a reason: when it hits you in the face, it doesn’t hurt as much. But, while the users’ lives might not be on the line, design blueprints (also called design specifications or specs) could mean the difference between a correctly implemented design that improves the user experience and satisfies customers and a confusing and inconsistent design that corrupts the user experience and displeases customers. (Editor’s Note: Before tools like Figma were on the rise, it was even more difficult for designers and developers to communicate and so tools such as Specctr — which this article mentions — were much needed. As of today, this article from 2014 is a bit of a trip into history, but it will also give you a fairly good idea of what design blueprints are and why they are so important in the designer-developer handoff process.)
  • “Everything Developers Need To Know About Figma,” Jurn van Wissen, Smashing Magazine
    Unlike most design software, Figma is free and browser-based, so developers can easily access the full design files making the developer handoff process significantly smoother. This article teaches developers who have nothing but a basic understanding of design tools everything they need to know to work with Figma.
  • “Penpot, An Open-Source Design Platform Made For Designers And Developers Alike,” Mikołaj Dobrucki, Smashing Magazine
    In the ever-evolving design tools landscape, it can be difficult to keep up with the latest and greatest. In this article, we’ll take a closer look at Penpot, the first design and prototyping tool that’s fully open-source and based on open web standards, making it an ideal choice for both designers and developers. (Editor’s Note: Today, it’s not always “There’s only Figma.” There are alternatives, and this article takes a good look at one of them — Penpot.)
  • “The Best Handoff Is No Handoff,” Vitaly Friedman, Smashing Magazine
    Design handoffs are inefficient and painful. They cause frustration, friction, and a lot of back and forth. Can we avoid them altogether? Of course, we can! Let’s see how to do just that.
]]>
hello@smashingmagazine.com (Ben Shih)
<![CDATA[Meet Success At Scale, A New Smashing Book By Addy Osmani]]> https://smashingmagazine.com/2023/05/success-at-scale-pre-release/ https://smashingmagazine.com/2023/05/success-at-scale-pre-release/ Thu, 25 May 2023 20:00:00 GMT Today, we are very happy to announce our new book: Success at Scale, a curated collection of best-practice case studies capturing how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale. Case studies are from industry experts with guidance that stands the test of time.

Join Addy Osmani, your curator, as we dive into a nuanced look at several key topics that will teach you tips and tricks that may help you optimize your own sites. The book will also include short interviews with each contributor on what additional lessons, challenges, and tips they have to share some time after each case study was written.

High-quality hardcover. Curated by Addy Osmani. Cover art by Espen Brunborg. Print and eBook shipping in fall 2023. Pre-order the book.

Contents

Each section of the book is filled with case studies from real-world large-scale web applications and services, interviews with the people involved, and key takeaways to help you achieve the same success.

  • Performance includes examples of measuring, budgeting, optimizing, and monitoring performance, in addition to tips for building a performance culture.
  • Capabilities is about bridging the gap between native capabilities and the modern web. You’ll explore web apps, native apps, and progressive web applications.
  • Accessibility makes web apps viable for diverse users, including people with temporary or permanent disabilities. Most of us will have a disability at some point in our lives, and these case studies show how we can make the web work for all of us.
  • Developer Experience is about building a project environment and culture that encourage support, growth, and problem-solving within teams. Strong teams build great projects!

Who This Book Is For

This book is for professional web developers and teams who want to deliver high-quality web experiences. We explore dimensions like performance, accessibility, capabilities, and developer experience in depth. Success at Scale goes beyond beginner material to cover the pragmatic approaches required to tackle these challenges in the real world.

About the Author

Addy Osmani is an engineering leader working on Google Chrome. He leads Chrome’s Developer Experience organization, helping reduce the friction for developers to build great user experiences.

Technical Details

  • ISBN: 978-3-910835-00-9 (print)
  • Quality hardcover, stitched binding, ribbon page marker.
  • Free worldwide airmail shipping from Germany starting in fall 2023.
  • eBook available for download in fall 2023 as PDF, ePUB, and Amazon Kindle.
  • Pre-order the book.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members as soon as it’s out. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Add to cart $44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Add to cart $44

Interface Design Checklists

100 practical cards for common interface design challenges.

Add to cart $39

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Impact Of Agile Methodologies On Code Quality]]> https://smashingmagazine.com/2023/05/impact-agile-methodologies-code-quality/ https://smashingmagazine.com/2023/05/impact-agile-methodologies-code-quality/ Thu, 25 May 2023 11:00:00 GMT As software development continues to evolve, so too do the methodologies and approaches used to create it. In recent years, Agile methodologies have gained widespread adoption as a modern approach to software development, with a focus on flexibility, collaboration, and delivering working software in short increments. This is a key differentiator when it comes to other development workflows.

One of the key benefits of Agile methodologies is its impact on the quality of the code that ships. Code quality is an essential aspect of software development, as high-quality code is critical to ensure the reliability, maintainability, and scalability of any software, website, or application.

Overview Of Agile Methodologies

Agile methodologies are a set of software development approaches that prioritize flexibility, collaboration, and delivering working software in short increments. Agile methodologies aim to improve the quality of the software by allowing for frequent feedback, continuous improvement, and adaptation to changing requirements.

The Agile Manifesto, created in 2001 by a group of software developers who wanted to find a better way of developing software, outlines the core values and principles of Agile methodologies. These values include prioritizing individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change rather than following a concrete, long-term plan.

Agile methods break down projects into small and manageable units called sprints. Sprints are completed by cross-functional and self-organizing teams in a short period of time, usually two to four weeks. During each sprint, the team works on a specific set of tasks, and at the end of the sprint, they review their work, evaluate customer satisfaction, and identify areas for improvement. Because each sprint is focused on a specific set of tasks, the team can quickly pivot and adjust their approach if they receive new information or feedback from customers or stakeholders. This results in faster turnaround times and a more responsive development process which is essential for creating high-quality software that meets the needs of the end users.

There are several Agile methodologies that teams can choose from to develop software in a more flexible and iterative way.

  • Scrum: This is perhaps the most popular Agile methodology. It involves a small team of developers working together in short sprints to deliver a working product incrementally. Each sprint typically lasts for 2–4 weeks.
  • Kanban: This methodology focuses on continuous delivery and improving workflow efficiency. Work is broken down into smaller pieces and tracked on a visual board, and team members pull work items as they are ready to work on them. If you’ve used a Trello board before, then you know exactly how this works. Other apps, like Notion, offer similar features.
  • Extreme Programming (XP): XP is a methodology that emphasizes software quality and customer satisfaction. It involves practices such as pair programming, test-driven development, and continuous integration.
  • Lean Development: This methodology aims to reduce waste and increase efficiency in the development process. It involves continuous improvement and a focus on delivering value to the customer.
  • Crystal: This methodology is designed for small teams working on projects with a high degree of uncertainty. It involves frequent communication, regular feedback loops, and an emphasis on collaboration.

How Agile Methodologies Can Impact Code Quality

Code quality is one of the most essential aspects of any development process, as it directly impacts the success of any product. Agile methodologies have been designed to prioritize a customer-centric approach by breaking down features into smaller, manageable pieces of functionality. This allows for more frequent releases of working, quality code that can be tested and reviewed, helping teams deliver high-quality software that meets customer needs. Here are some practical ways in which Agile methodologies promote code quality in development:

  • Prioritizing simplicity and efficiency.
    Agile methodologies prioritize simplicity and efficiency in software development. This means that developers are encouraged to write code that is not only functional but also easy to understand, test, debug, maintain, and modify. The goal is to create a codebase that is clean and simple, which can help reduce the potential for bugs and errors.
  • Encouraging modularization.
    The Agile process promotes the modularization of code. By breaking code down into smaller, modular components, developers can create code that is more flexible and reusable. This can save time and effort in the long run by reducing the need for repetitive or verbose code. Additionally, by optimizing the performance of each component, the developer is able to reduce the overall processing time, resulting in a more efficient codebase. Agile also encourages breaking features down into smaller, more manageable pieces — often referred to as user stories or epics. This approach allows development teams to focus on delivering small, working pieces of functionality that can be tested and validated before being integrated into the larger codebase, while also enabling them to respond quickly to changing requirements or feedback.
  • Improving readability.
    It’s important that code is legible and understood across the team, as it affects not only the developer who wrote the code but also other developers who may need to modify or maintain the code in the future. Agile methodologies help developers focus on writing code that is self-documenting and easy to understand by promoting the use of clear and concise coding practices such as self-descriptive naming conventions and avoiding complex code structures.
  • Test-Driven Development (TDD).
    TDD involves writing tests for the code before writing the code itself, which can help ensure that the code is well-structured and easy to read (see the short sketch after this list). This method emphasizes continuous feedback and improvement on the code, as developers are regularly provided with feedback on their work and have opportunities to make improvements as they go. By receiving feedback early on in the development process, developers can address issues and make changes to their code before they become bigger problems.
  • Continuous integration.
    This is a development practice that involves frequently integrating code changes from multiple developers into a single shared codebase. With continuous integration, code is automatically compiled, tested, and validated, which helps to catch issues early on in the development process. This approach ensures that code is always in a releasable state, which ultimately helps to improve code quality and reduce the risk of bugs or errors.

Overall, Agile methodologies can help developers write better code by promoting continuous code feedback and improvement while prioritizing simplicity and efficiency. By following these principles, developers can create code that is more efficient, maintainable, and robust, ultimately resulting in a better end product.

Key Principles Of Agile Development

At its core,

Agile methodologies value individuals and their interactions over following strict processes and tools.

This means that communication and collaboration between team members are prioritized to ensure everyone is working towards the same goals.

These processes are governed by a set of guiding principles that help the development team to create software that is tailored to the customer’s needs while ensuring high-quality delivery.

  • Customer satisfaction is the top priority.
    The goal of Agile development is to create software that meets the needs of the customer. This means that the customer is involved in every step of the process, from planning to testing.
  • Teamwork is essential.
    Cross-functional teams that work together to complete tasks are a core principle. This means that everyone on the team has a role to play, and everyone works together to achieve the same goal.
  • Flexibility is key.
    Everything about Agile development is designed to be flexible and adaptable. This means that the team can change course if needed, and the development process can be adjusted based on feedback from the customer.
  • Communication is critical.
    Open and honest communication between team members and the customer is encouraged. Everyone should feel empowered to share their ideas, concerns, and feedback.
  • Iterative development.
    Agile development involves breaking the development process down into smaller, more manageable pieces. By working on one sprint at a time, the team can make progress quickly and efficiently.
  • Continuous improvement.
    This means that the team is always looking for ways to improve the development process and make it more effective.

Prioritizing Collaboration And Communication

Effective collaboration and communication are crucial in any team-oriented project, and Agile methodologies place a particular emphasis on these values.

Prioritizing collaboration and communication ensures that everyone involved in the project is working towards the same goals and that any issues or concerns can be addressed quickly and effectively.

When collaboration and communication are prioritized, team members are encouraged to share their expertise and insights, which can lead to more creative and innovative solutions.

In an Agile environment, team members work closely together, and there is often a high level of interdependence between different areas of the project. If one team member is struggling or working in isolation, it can have a ripple effect on the rest of the team and ultimately impact the success of the project. Collaborating with other developers can help identify issues in the code that may not have been noticed otherwise. For example, another developer may notice a potential security vulnerability or identify a bug the original developer missed. Here are some of the key ways to ensure this:

  • Encourage cross-functional teams.
    Bringing together individuals with different skills and expertise can lead to stronger communication between business owners and the technical team that produces the product. I remember a time when I was working on a project with my team, and we divided the work based on each person’s strengths. This approach allowed everyone to contribute their best work to the project.
  • Break down silos.
    Silos refer to a situation where different teams or departments within an organization work in isolation from each other, without much communication or collaboration. Silos can lead to several negative outcomes, such as a lack of transparency, duplication of effort, and a slower development process. Eliminating barriers between individuals and teams would help foster collaboration by allowing individuals to share their skills and expertise.
  • Hold regular check-ins and feedback sessions.
    Scheduling consistent check-ins and feedback sessions can help ensure everyone is aligned on priorities and goals. I’ve found that this approach helps keep everyone motivated and focused on the end goal.
  • Use proper communication channels.
    Utilizing appropriate communication channels can increase the transparency and visibility of the project. In my experience, using tools like instant messaging (like Slack) and video conferencing (like Zoom) has helped facilitate collaboration and information sharing, particularly in a remote team environment.
  • Hold dedicated “Ask Me Anything” (AMA) sessions.
    AMA sessions can help frontline managers understand the rationale behind the approach and become comfortable with empowering their teams and giving up control. I remember a time when my team participated in one of these sessions, and it helped us better understand the benefits of Agile methodology because it put everyone on the same page and made everyone more confident in the overall direction.

Failing to prioritize collaboration and communication can have serious consequences for an Agile project. Miscommunications and misunderstandings can lead to delays, missed deadlines, and even project failure. Team members may become demotivated or disengaged if they feel they are working in isolation or not being heard. In the worst-case scenario, the lack of collaboration and communication can lead to a breakdown in the project team, which can be difficult to recover from.

Refactoring And Code Reviews

Refactoring refers to the process of improving the internal structure of code without changing its external behavior. It is done to enhance code readability, maintainability, and performance. On the other hand, code review is the process of examining code to identify issues or defects that may affect its quality, security, or functionality.

Refactoring

Refactoring is the process of restructuring existing code without changing its external behavior. It should be done frequently in Agile projects — often in the middle of a sprint — to keep the codebase clean and avoid technical debt. Here are some steps on how to carry out refactoring in Agile, with a short before-and-after sketch after the list:

  • Identify the parts of the codebase that need refactoring.
  • Discuss with the team why refactoring is necessary and the benefits it can bring.
  • Prioritize the refactoring tasks based on their impact on the project.
  • Break down the refactoring tasks into small, manageable chunks.
  • Refactor the code while ensuring that it still passes all the tests.
  • Get feedback from the team and stakeholders on the refactored code.
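
To make these steps concrete, here is a small before-and-after sketch in TypeScript. The shipping-cost rules, values, and names are invented for the example; what matters is that the returned value stays identical for every input while the structure becomes easier to read and maintain:

```typescript
type Order = { total: number; express: boolean };

// Before: nested conditionals and unexplained "magic" numbers.
function shippingCostBefore(order: Order): number {
  if (order.total > 100) {
    if (order.express) {
      return 10;
    }
    return 0;
  }
  if (order.express) {
    return 20;
  }
  return 5;
}

// After: named constants and guard clauses. The external behavior is
// unchanged for every input, which the existing tests should confirm.
const FREE_SHIPPING_THRESHOLD = 100;
const STANDARD_RATE = 5;
const EXPRESS_RATE = 20;
const EXPRESS_SURCHARGE_ON_FREE_ORDERS = 10;

function shippingCost(order: Order): number {
  if (order.total > FREE_SHIPPING_THRESHOLD) {
    return order.express ? EXPRESS_SURCHARGE_ON_FREE_ORDERS : 0;
  }
  return order.express ? EXPRESS_RATE : STANDARD_RATE;
}
```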

Code Review

A code review is a process of systematically reviewing the code written by other team members. It aims to improve the code’s quality, find bugs, and ensure it adheres to coding standards. A code review should be done early and often in Agile projects to ensure that the codebase is always of high quality. Here are some steps on how to carry out a code review in Agile:

  • Assign a team member to review the code written by another team member.
  • Review the code for readability, maintainability, and adherence to the coding standards.
  • Provide feedback on the code and suggest improvements.
  • Discuss the feedback with the code author and come up with a plan to address the issues.
  • Make sure that the code changes are reviewed again after they are implemented to ensure that they meet the desired quality standards.

Overall, refactoring and code review are essential practices in Agile methodologies that help ensure the code is of high quality and meets the customer’s needs. By incorporating these practices into the development process, the team can improve collaboration, reduce technical debt, and deliver high-quality software faster.

Agile Compared To Traditional Workflows

Traditional workflows refer to development methodologies that follow a linear, sequential process, where each phase of development must be completed before moving on to the next phase, with a focus on ensuring that all requirements are clearly defined before development begins. Some examples of traditional workflows include the Waterfall model, the V-model, the Spiral model, and the Rational Unified Process. These methodologies are often referred to as “plan-driven” or “heavyweight” methodologies, as they involve extensive planning and documentation upfront, with less flexibility for changes during the development process.

Take a look at the Waterfall model, for example. This model, also known as the “classic life cycle model,” is based on a series of well-defined phases, with each phase depending on the successful completion of the previous one.

The phases of the Waterfall model typically include requirements gathering, design, implementation, testing, deployment, and maintenance. Once one phase is completed, the next phase begins, and there is no going back to the previous phase. This means that the Waterfall model follows a “top-down” approach, where each phase is dependent on the previous phase’s success. And, true to its name, the process resembles a waterfall.

One of the key characteristics of the Waterfall model is that it is heavily focused on planning and documentation. Before the development team begins coding, the project requirements and design specifications must be fully documented. This documentation is then used to guide the entire development process.

While the Waterfall model has been a popular development process for many years, it has several limitations. For instance, the linear and sequential nature of the model can be inflexible, making it challenging to incorporate changes and feedback throughout the development process. It also puts a lot of emphasis on up-front planning, which can be time-consuming and costly. Plus, we all know that even the best-laid plans don’t always go right.

As a result, many software development teams have shifted towards using Agile methodologies instead of the Waterfall model. Agile methodologies offer greater flexibility and collaboration, enabling teams to adjust their approach as they gather feedback and insights throughout the development process.

Here are some key differences between Agile methodologies and traditional workflows:

  • Flexibility: Agile is flexible and adaptable; traditional workflows are rigid and structured.
  • Customer involvement: Agile prioritizes customer involvement and feedback throughout the development process; traditional workflows offer limited customer involvement, with the customer presented with the final product at the end of the process.
  • Team structure: Agile teams are cross-functional and collaborative; traditional teams are specialized and isolated.
  • Testing: In Agile, testing occurs throughout the development process; in traditional workflows, it occurs at the end of the development cycle.

While traditional workflows may have some advantages, such as providing a clear roadmap and a structured approach, I believe Agile methodologies are better suited for today’s fast-paced, ever-changing software development landscape. Agile methodologies offer the flexibility and adaptability necessary to meet changing requirements and deliver high-quality software products.

Conclusion

In conclusion, adopting Agile methodologies can have a significant positive impact on code quality. By prioritizing collaboration and communication, implementing test-driven development, and regularly conducting code reviews and refactoring, development teams can ensure that the code they produce is high-quality, maintainable, and meets the customer’s needs.

It’s worth noting that Agile methodologies are not without their challenges, such as the potential for scope creep. You can imagine how a flexible process that encourages frequent collaboration and feedback could lead to a project growing more legs than it needs. That said, organizations that have adopted Agile methodologies report higher levels of customer satisfaction, faster time-to-market, and overall improved project success rates. As the industry continues to evolve, it’s likely that we will see more and more organizations embrace Agile methodologies to improve code quality and project outcomes.

]]>
hello@smashingmagazine.com (Sarah Oke Okolo)
<![CDATA[Smashing Podcast Episode 61 With Rachel Andrew: What Is Web Platform Baseline?]]> https://smashingmagazine.com/2023/05/smashing-podcast-episode-61/ https://smashingmagazine.com/2023/05/smashing-podcast-episode-61/ Tue, 23 May 2023 08:00:00 GMT In this episode of the Smashing Podcast, we’re talking about Web Platform Baseline. What is it, and how can it help determine your browser support policy? Drew McLellan talks to expert Rachel Andrew to find out.

Transcript

Drew: She’s a web developer and technical writer and editor. She’s currently working for Google on the Chrome team where she’s a staff technical writer and content lead for web.dev and developer.chrome.com. Prior to Google, she spent 20 years as a freelancer and business owner and she’s written almost countless books and articles where she excels at taking complex technical subjects and making them more readily understandable. She’s also an experienced conference speaker, able to deliver a technical talk to teach an audience about CSS layouts or a keynote to inspire them drawing from her wealth of experience developing for the web. So we know she’s an experienced technical writer, teacher and developer, but did you know she once taught a Canada goose to make a bourbon cocktail? My smashing friends, please welcome back Rachel Andrew. Hi Rachel, how are you?

Rachel: I’m smashing.

Drew: Welcome back to the podcast. It’s been a couple of years and there’s been a change of day-to-day role for you.

Rachel: Yes, yes. I guess last time I was here it was mid pandemic and I was still editor-in-chief of Smashing Magazine and yes, these days I’m over at Google on the DevRel team with my content team sort of helping to get good docs and information out to our developers about things on the web platform.

Drew: So still in the realms of helping people learn about the web platform and assisting them in their busy lives, trying to keep pace with all the new technologies and developments?

Rachel: Yes. Yeah, it’s kind of a perfect role for someone who spent most of their life sort of explaining things to web developers. So yeah, it’s great and within a really great team of people who were very dedicated to talking about all this new stuff.

Drew: So speaking of new developments and also Google, last week was Google I/O 2023, which is always an exciting time for us tech nerds because there are all sorts of announcements and updates from Google. With Google being such a large contributor to the web platform, it then becomes an exciting time to see what’s been worked on for the web in particular and see what might be coming out next. I feel like we’re in a place where the web platform is continuing to develop at a fantastic pace at the moment.

Rachel: Yeah.

Drew: Those of us who have been working in the industry for a while remember the years when nothing was added in terms of browser capabilities, I mean sometimes years at a time. You were working on the web back then. Was it frustrating that things weren’t getting added or did it just make it easier to keep up?

Rachel: I think it was frustrating. You know, we had five years between IE6 and IE7, so that was kind of five years that the web platform just basically stopped because so many people were using IE6. Although there were other new browsers around, you couldn’t really use all the new stuff that they were putting into the browser because the majority of people coming to your website were in a browser that didn’t support it. So I think it was very frustrating because that’s a very, very long time, especially when IE6 had all sorts of bugs and issues as well so that we weren’t getting fixes to things.

Rachel: It wasn’t even new features. We were dealing with problems, like bits of your content disappearing for no apparent reason. So yeah, it was frustrating, but it was very stable. Buggy but at least the bugs that we could list them, there were websites that listed all of the IE6 CSS problems, so you’d hit one and you’d be like, oh yeah, that’s that. I know how to fix that. So we all became pretty expert in dealing with browser bugs basically and knowing what they were.

Drew: I remember things like Peekaboo, was it the Peekaboo bug, from that era.

Rachel: Yes.

Drew: And what was the website that listed them, listed them all? I can’t remember its name now, but the list of known bugs just got longer and longer and longer over time to the point where it became difficult to find the particular bug you were experiencing because the list was so long. We were in a place back then where the dominant browser, which was Internet Explorer at the time, was the browser that was seeing the least technical innovation but that doesn’t mean there was no technical innovation because there was a broader ecosystem, but was it ever possible to use new bits of CSS that were appearing in things like Firefox? Is that something we could do when the dominant browser was so far behind?

Rachel: It was pretty hard. I mean, I think all the ideas of things like polyfills and also there was a lot of us kind of pushing the progressive enhancement story as well and saying, look, it’s fine, your website doesn’t need to look the same in all browsers. I think I’ve been saying that for most of my life at this point. And that was a big thing at the time because people were just sort of A/B testing in the browsers, you know, there was no... you’d send it off to your client and they would just open it in another browser and be like, "Oh no, this is wrong 'cause it’s three pixels out on this other browser."

Rachel: And that was very, very common. People would talk about pixel perfect and what they would typically mean is it should be exactly the same as the PDF or whatever that you were working from or the Photoshop file and all of the browsers that they were aware of, or at least both browsers typically. So I think it was quite difficult to push the web forward at the time, you got quite a lot of resistance and you’d often have to just do it anyway and hope you’d get away with it quite a lot of the time.

Drew: We don’t seem to see that so much these days where clients or anyone really is looking at a web experience side by side in two different browsers and saying, oh, they’re not quite the same. Is that because browsers are much more standardized now and they do look the same or have the expectations changed, do you think, because of so many devices that we’re looking at, the fact that mobile devices and tablets and so many different screen sizes that has that expectation gone away?

Rachel: Yeah, I think it’s a bit of both, isn’t it? I think the web browser is how we do everything these days and it’s less of a separate bit of software, it’s just kind of how you use your computer a lot of the time, and I think there’s less of an awareness, for someone who isn’t a developer, of, oh, we should be checking this in the different browsers. Far more likely, I think, would be someone saying, "This doesn’t work well on my phone." 'Cause they’ll get the email saying, oh look at the new site, and they’re probably on their phone when they get that email and they’ll open it on their phone and then they find, oh, something’s overlaying something or it’s hard to get to something because of a toolbar or whatever.

Rachel: So I think it’s far more likely that a client is going to be coming back with that kind of problem. Maybe they’ve got an older version, an older phone that they’ve not updated and it’s got an older version of software on it or whatever than doing that kind of desktop A/B testing that used to be really common, even with a fairly non-technical client, they would’ve been told by someone that they should make sure it works in these browsers and so they would be doing that checking.

Drew: Yeah, I mean clients would come along to those of us who are building sites for them and they would say, right, we need this site built and it needs to work in IE6 or it needs to work in IE7 and they’d have these very definitive browser versions that things had to work in. And now between, as you mentioned, between IE6 and IE7, there was a multiple year gap, so that constraint from the client could have, it could massively impact your sort of choice of technology or design, couldn’t it?

Rachel: Oh, absolutely. Yeah, I mean that was just sort of fairly standard when you were building sites, and at the time I was building sites for clients, the spec for the site would say which browsers you had to support, and you would be expected to test it in those browsers, and if it worked in those browsers, that was all good. That was the line that you were following.

Drew: Yeah, I guess even then, things were pretty limited. It was a fairly easy decision to make to say these are the browsers that we’re supporting. It’s got to work in IE7 for whatever reason.

Rachel: Yeah.

Drew: It was fairly clear cut, but these days I don’t think I could even tell you what version of Chrome or Firefox or Safari I’m running or if that’s the latest, I’m presuming it’s the latest, but it’s not so clear cut and straightforward now, is it?

Rachel: Right, yeah. You don’t even notice that the things update. They just update and you don’t realize if that’s a major version or just some say security release that’s come out that you need to update to. I don’t think most people know which features landed in which version of a browser. We used to know. We used to know exactly what was available in each browser, so it’d be like, "Oh great, this project is IE8 and therefore I’ve got, I don’t know, display table" or something that landed in that browser.

Rachel: We used to know. These days we don’t know. I know I spend all of my time documenting this stuff and writing about what’s new in the web platform and even so, I’m fairly hazy. If you said to me, “Oh, what was in Chrome 113?” And I’ve just done the work on that, I’d be like, “Err, was that in that one or was that in the beta?” So for the average developer, you’re not going to be able to keep track of all that stuff. There’s so much stuff landing all the time.

Drew: So it makes the situation quite difficult, doesn’t it, when you might sometimes have contracts with people you’re building stuff for and certainly expectations that there’s going to be a level of browser support. But if you don’t know what versions things are and they move really quickly, it can be really difficult to pin down to a targeted browser version. And this is, I believe, the crux of the problem that’s addressed by one of the big announcements at Google I/O. How do we figure out what’s safe to use?

Rachel: Yeah, and so this is something we’ve been thinking about actually for as long as I’ve been at Google. It’s the top pain point that we hear from developers: they struggle to keep up with the web platform and they struggle to know what is safe to use, what is okay to roll out in production without worrying about it. Typically developers will be building in the latest version of a browser and then suddenly they’ll realize that, oh, this is broken over here, and they just didn’t realize that. And to actually figure out the browser support involves going kind of property-by-property, feature-by-feature to caniuse or MDN and looking at the compatibility data. It’s all out there, but you have to do that on a feature-by-feature basis.

Rachel: And so we were thinking about this issue, and it always comes up, we talk to a lot of developers and it always comes up as the top problem, and so we were thinking about how we can resolve that. And that’s how we came to this idea of, well, can we create this line and say that everything that’s passed this line has interoperability and is kind of safe to use without worrying about it? And that’s where this idea of Baseline came from: to have this kind of moving line that includes all of the features that are interoperable and don’t have any major standout issues. And that’s what we’re calling Baseline.

Rachel: And the whole project isn’t just a Google thing; this comes from the Web DX community group. So we’re working with other browsers and other people on defining this and coming up with the feature groupings, so that we can create this clarity for developers: they’ve got a line where they can look and say, oh yes, this thing is in Baseline, and therefore I know it’s going to work everywhere in the most modern browsers.

Drew: So instead of saying we’re supporting these particular browsers, you’re saying this is a core feature set that’s common across all the currently available browsers. This is a safe set of features, and it’s that set that I’m going to be developing for compatibility with.

Rachel: Right, yeah. And that removes the requirement to figure out each individual feature, also because we get partial implementations of stuff all the time on the platform. The feature-grouping part of this is the big piece of work, really: to actually identify whether a feature completely works everywhere, because sometimes there will be partial support for things. An obvious example that people understand is the gap property in Flexbox and Grid. You could test for whether gap was supported, and a browser would say yes because it was supported in grid layout, even when it wasn’t supported in flex layout, and therefore there was no way to check for this. And it was quite confusing for people if they were just doing that test. So I think these sorts of groupings of things are also quite useful. The things that are in Baseline are things that do work as a feature, even if that does actually involve various moving parts.
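(An editorial aside: here is a minimal sketch of the misleading test Rachel describes, using the standard CSS.supports() API.)

```js
// CSS.supports() answers "is this property/value pair recognized?", so
// it returned true in browsers that only supported gap in grid layout,
// long before gap worked in flexbox. The test alone could not
// distinguish the two cases.
if (CSS.supports('gap', '1rem')) {
  // True even where flexbox gap was missing, which is exactly why
  // Baseline groups by "does the whole feature work everywhere"
  // rather than testing property-by-property.
}
```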

Drew: Yes, because there’s been a trend in the latest CSS specs toward, what’s the word, sort of unifying some of the properties, isn’t there, rather than-

Rachel: Yes.

Drew: ... rather than having individual properties that do the same thing in different contexts, using the same-

Rachel: Right.

Drew: ... keywords across different uses.

Rachel: Yeah, so things like alignment and fragmentation: we’ve got these specifications that deal with alignment across all of the different layout specs, which is great because it means that if you want to switch from a flex to a grid layout or whatever, all the alignment stuff should work in the same way. But it does mean that we potentially get these partial implementations, and that’s quite difficult to understand. So yeah, an awful lot actually goes into the creation of this feature-set grouping, and we’re not all the way there yet. We’re hoping to get most of CSS and JavaScript done by the end of the year, because it’s actually quite a job just to figure out how things all fit together.

Drew: So it’s almost like instead of targeting a version of any particular browser, we’re targeting a version of the web platform. We’re saying-

Rachel: Yeah.

Drew: ... look at the web platform as it is here today; these are the things that are universal and reliable to use, and that’s what we’re going to support. And anything that falls outside of that boundary isn’t included, because the implementation might be patchy.

Rachel: Right, yeah. It might need a bit more care. And it’s not saying to people, oh, you can’t ever use these things. But if you know something’s not in Baseline, then maybe there are some things you need to think about. It might be fine for your project, or it might have a good fallback, or it might be polyfillable, but those are things that you do need to think about on a case-by-case basis, rather than just, this should be fine to use.

Drew: I think most of us are familiar with sites like caniuse.com, which you mentioned briefly before. Is this just replicating information that already exists, or is it different from Can I Use?

Rachel: I think it’s different in that Can I Use, and also the MDN BCD data, work very much on a feature-by-feature basis. They don’t actually cover all of the web platform, and Can I Use has made some decisions in terms of how to group certain things. I have a long-standing open issue to split out fragmentation from multicol, for example, because they’re bundled together, making multicol look harder to use than it actually is, because there are fragmentation bugs in there.

Rachel: So they’ve done some of the same stuff, but what we haven’t got there is this full view of the platform and this idea of this is within Baseline, this is out; you still have to go to each thing and make those decisions. MDN are using Baseline on feature pages and rolling that out at the moment, and we’re hoping that Can I Use will also be able to say, “Oh, this feature is in Baseline,” as well as providing that more fine-grained data.

Drew: And how do you make the decision to say that, yes, not only is this supported, but it’s widely supported enough that we can include it in Baseline? How do you make that distinction?

Rachel: So at the moment we’re going back the last two major versions of browsers, and there’s been a lot of debate about that, as you can imagine. It’s something that’s great to [inaudible 00:17:38]. The fact is, if we say this is the line, two versions back, a lot of people are saying, “Oh, you should use minor versions of Safari,” because we’ve seen some massive features going into dot releases because of the way that Safari does its versioning, whereas for a main version of Firefox and Chrome, we’ve got a new main version every month. So that’s obviously up for debate. Some people are saying we should go further back. Other people are pointing out that just because Chrome has updated, all of the browsers that are derivatives of Chromium might not have updated. So the line will always be wrong, I think.

Rachel: But what it does give is this sort of stable view onto things. And the other thing that we’re planning to do as part of this is to have these kinds of moments in time. So at the end of the year we’re going to say, right, this cut of where we are at this point is going to be Baseline 24, and that will be a static line; that will be what’s in Baseline at this point in time. And then in a year’s time we’ll do Baseline 25. And I think an interesting thing then will be the difference between those two points, because a conservative web team could say, “Right, I am sticking with Baseline 24,” even though maybe we’re well into 25: we’re sticking with this.

Rachel: But the things between those two lines then, I think, become the things that you might want to make judgments on, rather than having to look at the entire web platform and say, “Oh, can I use this? Can I use that?” You can say, “Well, we’re going to use this yearly cut of Baseline.” And then the things that came after that, that are in Baseline as it moves forward, we’ll take a look at and see, oh, I can polyfill that, or this is fine as a progressive enhancement.

Drew: It puts me in mind slightly of things like the Ubuntu Linux distribution and the long-term support releases that they do.

Rachel: Right.

Drew: They’ll say, “This is the one that we offer long-term support for. It’s stable, it’s reliable to use.” And so you might adopt that, and that doesn’t mean you wouldn’t necessarily install a couple of key extra, more frequently updated packages or whatever, but you know that the system that you’re working with is sort of frozen in time, supported, and a known quantity going forward.

Rachel: Yeah.

Drew: I guess for those who work in very regulated industries, who frequently go under contract with customers or suppliers to say they’ll provide compatibility with certain browsers, surely this would be a very welcome change, because these are more concrete measures that support can be tied to. It’s a stability that’s more in line with the stability of a binding agreement than an arbitrary version number that some nerd in Silicon Valley might attach to a build of a browser.

Rachel: Right.

Drew: So you can say our platform is targeting Baseline 24, and you could keep it that way for three, four years maybe.

Rachel: Yeah.

Drew: And then review it and update.

Rachel: Yeah, I like that. I like that idea, that this is a sort of stable thing, and I think that yearly release will become quite important. I can see libraries and frameworks and so on tying themselves to a stable release, one of the yearly cuts, and then moving on. And I think it will be really interesting as well to be able to see, well, actually, how has the platform moved on between those two yearly points? We don’t really have a look at that at the moment. I mean, you could work it out, but it’d be quite a lot of work. It’d be nice just to be able to see that and see how things are changing.

Drew: I always enjoy a list of features that are included in whatever it might be. Here are things that you can use that perhaps you weren’t aware of. And I can see how a big list of Baseline features might highlight different things that an individual developer might not be aware of that-

Rachel: Yeah.

Drew: ... have arrived on the web platform and are ready to be used.

Rachel: Yeah, I mean, the awareness is a big thing. A colleague and I have been doing “what’s new on the web platform” type talks, typically introducing things that are interoperable. And every time there will be people saying, “Oh, I never knew you could do that,” or “I never knew that worked. I thought that was an experimental thing,” and then realizing that it’s actually a feature that’s in all engines. And I think that’s very, very common. So the other side of this is that it also raises awareness of features that are now interoperable, because people have got this idea that the web platform moves incredibly slowly.

Rachel: I think particularly people like us, who’ve been doing this for a long time and remember those days. And so people are very surprised. You still see people saying about a new feature, “Oh well, it’ll be five years before I can use that.” And yet you’re looking at things like container queries and cascade layers: all of these things landed cross-browser very, very quickly, which is great. And I think that’s a story this can help tell as well.

Drew: So this was a big announcement from Chrome at the big Google I/O conference, but you mentioned it’s not just a Google thing, is it? There are other parties involved. So who is deciding what’s in the collective Baseline? What parties are involved in this?

Rachel: Right, yeah, so I mean obviously we partnered very closely with Mozilla and MDN in launching this. During the developer keynote we actually launched this on web.dev and on MDN at the same time, on a select number of pages, because we haven’t got a full feature site yet. But it was nice to actually show what it would look like rather than it being a kind of theoretical thing. And MDN published a blog post about it too, and their thinking. But yeah, the work has been done within the Web DX community group, and that group has representatives from all of the browsers and various other people, including interested developers.

Rachel: Anyone can join that group and be part of those discussions. So that’s where we’re also asking people to go and comment on this stuff. I mean, people are very welcome to come and talk to me about it, but in terms of getting information out there and discussed by the wider group, raise issues on the Web DX community group site, because that’s where the people are who are making the decisions. And at the moment it’s just fantastic to be getting the feedback into that group, so that we can actually see: is this solving a problem? What problems have we maybe missed? And be able to talk about that.

Drew: So it’s a broader community effort, but it just so happens that the major players, Google, Mozilla and everyone, are putting a lot of time and effort into it and really backing it as an idea.

Rachel: Yeah, yeah. And I think that’s something that, as developer relations, is kind of what we do. We try to bridge the gap between browser engineers and spec writers and the developer community. And so I think that’s something we can do as DevRel for the web: actually bring forward these things that we think might help and see where we can take them.

Drew: Now I’ve heard about the Interop 2022 and now 2023 initiatives. Does Baseline relate to Interop at all? Or maybe you could talk us through where it fits in?

Rachel: Yeah, I mean, it’s certainly the same group of people at Google who are involved with those projects. The Interop project is based on web platform tests: it takes a set of features that have some sort of interoperability problem. It might be that they don’t work in one or more browsers, or they have bugs that are causing people problems. So we’ve got this set of features, and then over the year all of the engines work to implement or fix those things. And we’ve got a scoreboard where you can go and look and see how everyone’s doing.

Rachel: So the Interop project works to fix known issues: either make things interoperable or fix bugs in things that look on paper like they work but have some sort of problems. And so that project is getting more things, essentially, into Baseline. So they’re linked in that way, and a lot of the same people from the browsers are working together on those. So in terms of the relationships there, and the fact that Interop did bring, for the first time, all of the vendors together in this common goal to make the platform better, there’s definitely a link in terms of this is what we care about. Whereas Baseline’s kind of from the other side; it’s saying, well, okay, what is there? What is interoperable? What can we already use? So yeah, hopefully things like Interop will help to add more things to Baseline as we go along.

Drew: So it’s basically identifying things that could potentially go into Baseline, that might be nearly there, and then swarming on those features to get them across the line and make them interoperable and usable on the platform, because they’re seen as important or significant in some way.

Rachel: Yeah, and we know that developers aren’t going to use things, in general, unless they are available across all engines. So it’s in everyone’s interest to work together to get to that point, because then people use the stuff that we’re building. So they work very well together, and I think it’s just this spirit of collaboration and trying to make things better for developers.

Drew: We’ve talked about how developers might, in the past, target a browser version, and now we’re saying they’d target Baseline. But it works the other way around too, doesn’t it? The frameworks and the tools that we use as dependencies in our projects can also declare that as a level of support. Is that right?

Rachel: Yeah, absolutely. That’s something we’d love to see: a framework or whatever could say everything that it uses is Baseline, or is Baseline 24, or what have you. That’s going to give a lot of clarity to developers, who then don’t need to fish around in the framework and find out what it’s doing, ’cause if you’ve got to hit a certain level of browser support in your project, you need to make sure that everything you use also has that level of browser support. So it could definitely make that a lot clearer.

Rachel: And I think the same goes for things like published articles. One of the things that frustrates people, and I know this as someone who writes and edits a lot of content, is when they get halfway through an article and then find something that is experimental, or so new that it only works in Chrome or whatever. That’s really frustrating, because you think, oh, I’ve found the thing that helps me solve my problem, you’re working through it, and then you’re like, oh, that’s not coming ’til next year. So being able to put on an article, “everything in this article is in Baseline,” gives you a lot of confidence to go forward. So I think there are lots of uses for this out in the community, and that’s something we really hope will happen, just to give that kind of clarity to developers.

Drew: It’s that last section of an article, isn’t it? You’re reading along about some interesting technology, and then it comes to the section on how you might work around it for the browsers that don’t support it.

Rachel: Yeah.

Drew: I thought-

Rachel: Exactly.

Drew: ... we were into a good thing here.

Rachel: Yeah, ’cause when you’re searching, you’re searching to solve a problem, and things come up. It’s very frustrating if you realize that it’s a year away, or other browsers have said we’re not doing that, or whatever, you know? So yeah, I think there are a lot of opportunities for clarity, for people who are writing and for developers of libraries and frameworks, to just make it very obvious to developers what the status is.

Drew: And things like WordPress themes, for example, or any of these situations where you’re taking somebody else’s code and making it part of your project: knowing what level of support, in terms of web functionality, is in it is invaluable. I guess it would make sense for any tool that gives you code to embed into your site, be that a Stripe checkout or a live chat widget or any of those sorts of things, to declare its state of compatibility too.

Rachel: Yeah, yeah, it’s a kind of shorthand. It saves you having to do all of that investigating for each thing that you use. And we know that every website these days has tons and tons of third-party stuff in it. We’re not all sitting down with Notepad anymore and carefully crafting our websites. So I think anything that makes that easier and allows people to show the status of things is really helpful.

Drew: It actually is a really simple concept, isn’t it, to say here’s the set of features, they’re well supported, we’re giving it a label, and we’re documenting it. It’s so simple that it’s really rather genius, I think. It’s some amazing work that’s been done there by everyone involved.

Rachel: Yeah, I think it speaks to a lot of what I’ve thought about over many years in terms of that kind of clarity. That’s always been my thing: making things clear to people, making things seem straightforward rather than complex. And so I really love being able to be involved with this and bring it forward.

Drew: The HTML spec for example has a process for an element or an attribute to be deprecated. So things get removed from the spec as they become obsolete or they’re replaced by a newer specification. Is it possible for features to drop out of Baseline once they’ve been included?

Rachel: It could be possible. It’s one of the things we’ve talked about a lot, and the devil will definitely be in the details with all this stuff. What happens if something essentially gets broken? Maybe one engine does something which causes a problem with something. There is a possibility that, yes, we’d have to remove something; that’s definitely something we’ve talked about. I mean, hopefully browsers aren’t going around breaking stable features, but it is a possibility. Or something might get deprecated, although we tend not to fully remove things from the web platform very often; it’s more that we say, “Yeah, maybe don’t use this.” But there is a possibility that something that is in Baseline could start to have a problem because of something that one of the engines does.

Drew: I guess then that’s one area where these yearly cuts, as you’ve described them, become quite useful, in that something might have appeared in Baseline 24, but then in Baseline 30 it might be gone, and there is a way of making that distinction.

Rachel: Yeah, and it would also highlight that stuff a lot more clearly than anything we have at the moment, because I think it’s hard to know what things have actually been deprecated on the platform. A lot of things that are deprecated are things that are only in one engine and therefore would never have been in Baseline in the first place. But yeah, it is possible as things move forward that that would happen, and it would make it clearer.

Drew: And such is the way of the web, we do deprecate things, but as you say, they don’t ever really go away.

Rachel: Yeah.

Drew: We don't-

Rachel: I was just saying maybe don’t use—

Drew: ... tend to remove things. You can still use, I’m guessing, HTML font tags, because we don’t break things once they’re standardized.

Rachel: Yeah.

Drew: Even though nobody would ever recommend using them, they’re still going to work in your browser because sites have been developed to that standard and the browser-

Rachel: Yeah.

Drew: ... will continue to support it. I guess, in a way, Baseline forms a little bit of a positive pressure. If a feature does get broken, then the fact that it was in Baseline, and the whole community is relying on it being there, is a factor in prioritizing what gets worked on by the maintainer of that particular browser engine. They’re going to see that, no, this is important, we need to fix it pretty quickly.

Rachel: Yeah.

Drew: So hopefully it’s a sort of positive pressure in that regard. There seems to be so much really in development and coming to the web platform. Are there any particular things that you’re really looking forward to seeing becoming interoperable in the coming months?

Rachel: Yeah, I mean, there’s a bunch of interesting stuff. I’ve always been interested in the things that look at what developers are already doing, using JavaScript or what have you, and then having them built into the platform, because for things that are built into the platform, we can build in things like accessibility, and also performance. Things tend to perform an awful lot better if they’re a built-in feature as opposed to being JavaScript on top. So there’s interesting stuff from the Open UI group. The next thing that is about to land in Chrome is the Popover API, and of course popovers are something that everybody’s building all the time.

Drew: Yeah.

Rachel: And I think a lot of these Open UI things are very much those sorts of features that pretty much every front-end developer has built on numerous occasions. And every front-end developer has tried to solve the accessibility issues and the performance issues and the weird bugs that come up when they interact with other things. So the fact that these are actually getting built into browsers, I think, is very exciting, because it’s a bunch of work you don’t have to do, it’s probably going to have better accessibility and so on than most people are going to manage for themselves, and it gives you something to build on top of as well; you can add things to them.

Rachel: So yeah, I’m excited to see Popover, and in a similar vein is the work on scroll-driven animations, because that’s a thing that people like to do and is very hard to do well, you know, having things that animate on scroll. And that, again, is something that is coming in; it should be in Chrome 115. So again, it’s these things that we’re doing on the front end of the web that we’re actually able to build into the browser. I’m always very keen to see those, ’cause I think they solve a lot of problems.
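(An editorial aside: a minimal sketch of the Popover API in use; the top-layer rendering, target wiring, and light-dismiss behavior come from the browser rather than hand-rolled script.)

```js
// Opting an element into popover behavior via the `popover` attribute.
const tip = document.createElement('div');
tip.popover = 'auto'; // auto popovers get light dismiss for free
tip.textContent = 'Hello from a native popover';
document.body.append(tip);

// Any control can toggle it, with no open/closed state to manage.
const button = document.createElement('button');
button.textContent = 'Toggle popover';
button.addEventListener('click', () => tip.togglePopover());
document.body.append(button);
```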

Drew: Yeah, definitely. I mean, anywhere a developer has to mimic something that looks like native browser UI and build it themselves, there are so many places to go wrong, aren’t there?

Rachel: Yeah.

Drew: If you’ve ever had any of your work go through an accessibility audit, you know that it’s things like modal dialogues and all these sorts of things that constantly contain flaws that need to be addressed, because there are just so many things to think about in terms of keyboard focus and clicking away and all these different subtleties that you need to take care of. And beyond being bad for accessibility, getting it wrong is a massive waste of time for all of us developers doing this ourselves over and over again. Most apps will have some sort of modal or popover functionality, so it makes complete sense for it to be part of the platform, implemented by the browser vendors in a way that’s accessible: a good solid layer to then build on top of in terms of styling and yeah-

Rachel: Yeah.

Drew: ... it makes total sense. It’s an exciting way to see the platform go.

Rachel: Yeah, and the other thing is that, with everyone building their own thing, a lot of people actually don’t build their own thing; they rely on a third-party thing. And quite often the things people are relying on are really old; they might have issues with accessibility or whatever, and they haven’t really been updated for more modern browsers. So I think the more that people can use what’s built into the browser, the better the experience the end user of the site is likely to have.

Drew: So your team at Google maintains a bunch of resources to help developers keep up to date with the web platform. What are those resources, and where should people go to find them? What would they expect to find there?

Rachel: Yeah, so we’ve got web.dev and developer.chrome.com, which are the two sites that DevRel owns. Back in the day, when I arrived, there was a real mixture of things on each site, and a thing that was commonly said was that Chrome was using web.dev to pretend that things that were only in Chrome were stable APIs. I don’t think anyone ever intended to pretend that; I think there was just a slightly disorganized content strategy. So as part of the preparation for Baseline, because if we’re talking about developer clarity, it’s pretty bad if all of our stuff’s in a mess, I started moving content. And so now, certainly for all the newer content (there may be some older stuff that we haven’t tracked down), if you go to web.dev, you should really be seeing stuff about stable APIs.

Rachel: So that’s things that are interoperable, and also things that are coming onto the platform. I do a sort of “what’s new on the web platform” piece that includes new stuff from all engines, looking at the broader landscape, and also things like our best practices. So things like performance: while some of the tooling is Chrome-only, raising the performance of your site is going to help in all engines. So that’s what’s on web.dev; that’s the practical side of things. You’re building a website, you want some advice: that’s what we’re doing there. And I try very hard to make that about the web, not about Chrome.

Rachel: But obviously we are a team that’s supporting Chrome and supporting the things that Chrome’s releasing, and we do that over on developer.chrome.com. So that’s going to be your new APIs. If you want to find out about Popover, which is landing, there’ll be an article about that soon. All the things that Chrome is doing for the web you can find on developer.chrome.com: experimental things or Chrome-only things, things that are Chrome-only for now; all that stuff is there. And I hope that brings a bit of clarity to our content, and that we’re not trying to pretend anything. We’re just trying to be clear about what we’re doing and how well supported it is.

Drew: Great. So we’ve been learning all about Web Platform Baseline. What have you been learning about lately, Rachel?

Rachel: There’s always something interesting to learn about. I’ve done a couple of things. I’ve been learning Python, because it’s a language that I, for whatever reason, never learned. I’ve learned various languages over the years, but I do less web development these days and more comparing of data sets, and Python is the language that a lot of that stuff is done in. So it’s quite fun to learn a new language anyway, and it’s useful for the sort of stuff I tend to find myself doing these days.

Rachel: And I’ve also been thinking a bit about the whole generative AI space, and in particular, as a content lead, how we prepare our content to make it more useful to those kinds of models, because there’s a lot of stuff about asking questions of a chatbot and so on. So I’ve been starting to read around that subject a little bit, to see, well, if we’re preparing content, how can we make it more useful for that kind of interaction?

Drew: If you, dear listener would like to hear more from Rachel, you can find her on the web at rachelandrew.co.uk where you’ll find links to her socials, her writing and numerous other projects. And you can find her writing regularly about the web platform at web.dev. Thanks for joining us today, Rachel. Did you have any parting words?

Rachel: Let us know what you think about Baseline. Comment and raise some issues, or just join in the chat on the Web DX community group, on the GitHub repo there. We’d really like to hear what you think. We’ve been talking about this internally for a long time, and now we’ve got it out there, the work starts now, and the discussion with the community starts now. And so we’re all very, very excited to read the feedback and find out what you think.

Resources

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[Practical Design Tips And Guidelines For Beginner Designers]]> https://smashingmagazine.com/2023/05/practical-design-tips-guidelines-beginner-designers/ https://smashingmagazine.com/2023/05/practical-design-tips-guidelines-beginner-designers/ Mon, 22 May 2023 11:00:00 GMT As someone who has worked with beginner designers for decades, I have observed a certain common problem. They often get overwhelmed with design tasks such as creating presentations, a personal website, mocking up an app idea, or even making menus or cards. It’s not due to a lack of ability and skills but rather because of unfamiliarity with the rules and systems that graphic designers are trained to understand.

To fill this gap, I have compiled some simple principles that are quick and easy to learn yet can greatly enhance any design project. In this article, we’ll focus on the four key elements of good design. Here they are:

  1. Structure,
  2. Spacing,
  3. Rhythm,
  4. Contrast.

By learning these simple concepts, anyone should be able to create effective designs that not only look good but also cater to diverse audiences, including those with disabilities. Are you ready to learn the power of these fundamental design concepts? If yes, follow me along!

Structure

Structure helps people absorb information. You see the toolbar before you can discern any individual tool in it. You recognize a form before you can identify a specific field in it. The proper structure helps you access information faster. It’s the main menu, the selection screen, before you dive in.

Let’s review an example. Imagine you are making a registration page for a webinar. The webinar page should contain the following information:

  • The event title,
  • A short synopsis,
  • The price of admission,
  • The time and date of the event,
  • The name of the host,
  • A photo of the host,
  • A short bio for the host,
  • Some social links for the host,
  • A register button,
  • A call-out: “Join me live!”.

Ask yourself, if you had to group those things into several groups/buckets, what would those be? How would you name each group/bucket? Here are the groups I would make, and it might not be precisely how you would group the information, but that’s OK; the idea is to just give you a start!

  • Headline: the call-out, the title, and the synopsis;
  • Host: the host name, the host photo, the host bio, and the social links;
  • Logistics: the event date, the time and duration, the price, and the register button.

Now that we have those buckets, let’s continue: in what order do we want the audience to ingest them? My choice would be the headline first, then the logistics with the big register button. I would keep the host information more as a detour, something optional to dive into if you would like to learn more.

Compare these two versions. Isn’t it easier to ingest the information in the one on the right?

As you can see, the structure of the information greatly influences your ability to absorb it. I will go one step further. I believe beauty is an artifact of well-organized information.

In short, don’t try to make something beautiful. Try to make something well-organized, and watch the beauty emerge!

“A well-designed, user-friendly information architecture ensures that users spend less time and effort searching for information and are successful in finding what they need. Key information-architecture tasks include identifying common features in content, forming groups of similar information objects, and linking documents to other documents on the same topic.”

— Anastasia Stefanuk, “The Importance of Information Architecture to UX Design”

Spacing

Spacing in graphic design refers to the arrangement of elements within a layout. Proper spacing can make your designs more visually appealing and easier for viewers to comprehend. It helps direct attention, creates hierarchy, and improves legibility.

Let’s look at another example. On the left, all the gaps are the same. On the right, the spacing is influenced by the structure. The inner group elements are more tightly grouped, creating a visible “scaffolding” for the content.

In print, space is expensive because the paper is not free. This is why in the past, only “luxury magazines” could afford those beautiful margins and airy layouts. Nowadays, white space is virtually free. Luxury for everybody!

"White space is the area between design elements. It is also the space within individual design elements, including the space between typography glyphs (readable characters). Despite its name, white space does not need to be white. It can be any color, texture, pattern, or even a background image. White space is a great tool to balance design elements and better organize content to improve the visual communication experience.”

— Mads Soegaard, “The Power of White Space in Design”

The Space Must Flow

Consistent spacing around elements gives visual harmony and unity to your work. It ensures a coherent look across all the components of your design. This is because irregular gaps draw the viewer’s attention away from the main focus, which is counterproductive. By maintaining even spacing between all objects, you allow the content to communicate without interference.

Here are some tips for creating a nice flowing space between and around elements.

Centering In A Box

Sometimes you need to put an image (like a logo, for example) inside a box. In this case, leave at least half its shortest dimension as a minimum margin. E.g., if the image is wide, use half its height as a minimum margin. If the image is tall, use half its width.
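Here is a tiny sketch of that rule, assuming pixel dimensions for the image:

```js
// Half of the image's shortest dimension gives the minimum margin to
// leave on every side when centering it inside a box.
function minimumMargin(width, height) {
  return Math.min(width, height) / 2;
}

minimumMargin(300, 120); // wide 300x120 logo -> at least 60px all round
```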

Vertically Centering Text

Most of the text mass sits between the baseline and the top of the lowercase letters — squint your eyes slightly, and you will see what I mean. When vertically centering text, a good rule of thumb is to center on the lowercase letters’ height. In designer speak, this is called the “x-height.” Some typefaces might need a tiny vertical adjustment, but we’re going for simple here.

“Why is it called ‘x’ height? Because the letter ‘x’ is the only letter in the alphabet that has all its terminals touch both the baseline and the meanline, with no extending points. Curved letters such as a, c, e, o, r, or s usually pass the font’s x-height slightly.”

— Sirine Matta, “What Are Cap Height and X Height in Typography?”

Text In A Shape

When centering text inside a box (like a button or a label), you need to use the height of the lowercase letters to do the centering and make sure there is the same amount of space all around the text, not just above and below.

Centering Polygons

When centering a polygon, center the circle passing by each point instead of centering the box around the polygon. Once again, this will ensure the space around the shape is nice and uniform.

Nested Rounded Boxes

The rounded boxes are great. They don’t have sharp edges. They feel more human, more organic as if these digital shapes have been sanded down or eroded, like pebbles.

However, when putting a rounded box into another one, a mistake I have often seen is to use the same border-radius on the outside box and the inside box. If you do this, the band (represented here with a colored gradient) will not have a consistent width. Instead, you need to use a proportional radius when nesting rounded boxes like this.
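As a quick sketch, the relationship is simple subtraction (clamped so it never goes negative):

```js
// For nested rounded boxes, the inner radius should be the outer
// radius minus the gap between the boxes, so the band between them
// keeps a consistent width all the way around the corner.
function innerRadius(outerRadius, gap) {
  return Math.max(0, outerRadius - gap);
}

innerRadius(24, 8); // outer radius 24, 8px of padding -> inner radius 16
```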

Rhythm

Human eyes get bored quickly. To keep their interest, you need to give them something new to parse and explore at regular intervals. But you don’t want to make it too hard either, too “new” every time — just a gentle walk along an interesting variety of patterns.

Rhythm contributes to a smooth flow that engages and holds the viewer’s interest while communicating effectively. A visual rhythm also provides directional cues to guide readers through the content.

Well, if you think about it, your slide deck is the same thing, and so is your portfolio, your app onboarding flow, and so many other things. Create variety constrained by a simple set of rules. Start with just left and right, and maybe one day, try using thirds if you feel comfortable. It’s not hard, you’ll see!

“Rhythm in art and design refers to a relationship between elements that creates a sense of harmony. Rhythm can be seen in patterns, in relationships between colors and shapes, and in repetitions of lines and forms. Rhythms help to guide the viewer’s eye around a piece rather than allowing the eye to settle on a particular focal point.”

— “Rhythm,” part of the “Principles of Design” BBC series

Repetition Is Key To A Rhythm

Repetition is key to a rhythm. Reusing elements such as colors, shapes, and text styles can create a sense of unity and coherence within your designs, making them feel more organized and visually appealing. It also makes your choices more intentional.

For example, I call a book-end any element that can be repeated to signal the beginning and the end of something. Shapes and colors are a great way to express creativity and bring some life to your content! It helps to tell your audience that a topic has ended in a nice and subtle way.

The repetition of certain visual elements creates patterns that catch our eye, engage our brains, and hold our interest, which can make any design feel more dynamic, expressive, and aesthetically pleasing. Find the guitar riff of your content; find the chorus!

Here is a meta example. These are the elements I reused throughout this article to create a sense of unity between all the illustrations. I had to pick some colors, decide on a style for the arrows, and make many more design choices. A bigger version of this is usually called a design system.

Reading Rhythm

When you are feeding words into people’s retinas, you need to find the right compromise between the size of the text chunks and the effort required to move the eye to the next line. That sounds mechanical because it kind of is. Too long a line, and it is hard to locate the beginning of the next line. Too short a line, and your reading gets interrupted too often for an eye carriage return.

I usually find that between eight and twelve words per line is a good goal for the main text block. Just count the words in a couple of sentences — no need to be too strict. Also, keep in mind that it’s a good rule for English, but other languages might have different sweet spots. For a sidebar, a caption, or something narrower, you can aim for five to six words.

Try reading some text in the example. You will see how much easier it is for your eyes to follow when the lines are not too long!

Before going further, I’d like to quote Gary Provost, an American writer and writing instructor, whose words on rhythm are spot-on:

“This sentence has five words. Here are five more words. Five-word sentences are fine. But several together become monotonous. Listen to what is happening. The writing is getting boring. The sound of it drones. It’s like a stuck record. The ear demands some variety. Now listen. I vary the sentence length, and I create music. Music. The writing sings. It has a pleasant rhythm, a lilt, a harmony. I use short sentences. And I use sentences of medium length. And sometimes, when I am certain the reader is rested, I will engage him with a sentence of considerable length, a sentence that burns with energy and builds with all the impetus of a crescendo, the roll of the drums, the crash of the cymbals–sounds that say listen to this, it is important.”

Gary Provost

Not So Justified

Justified text appears aligned on both sides. This is a very desirable attribute, but it’s very hard for it to happen naturally. Editing and design tools all do a pretty bad job of creating justified text that looks good. Instead, you will need to fine-tune letter spacing, word spacing, use good hyphenation, and sometimes even rewrite your text. Your job is to avoid text gaps that will hurt the pace of reading.

So, unless you spend the time to do the work manually, I suggest that you use left-aligned text instead. The price to pay for this shiny right edge is not worth it!

Contrast

Contrast refers to the use of differences (in color, size, shape, weight, direction, or texture) to attract attention, create hierarchy, enhance readability, and even evoke emotions like calm or energy. If a rhythm is the beat, contrast is the melody.

Mind The Low Contrast

To ensure your content is accessible to everybody, including people with visual impairments, text on a colored background should have enough contrast to be easily readable. There are plenty of apps or plugins that can calculate that for you (I’ve included a few references at the end of the article, check Further Reading). For now, all you really need to know is that a contrast ratio of 4.5:1 or higher is recommended.
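If you are curious what those apps compute, here is a sketch of the WCAG 2 formula behind the 4.5:1 guideline, for sRGB colors given as [r, g, b] values from 0 to 255:

```js
// Relative luminance per WCAG 2: linearize each sRGB channel, then
// weight it by how sensitive the eye is to that channel.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1 to 21.
function contrastRatio(foreground, background) {
  const [light, dark] = [
    relativeLuminance(foreground),
    relativeLuminance(background),
  ].sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]);       // 21, maximum contrast
contrastRatio([119, 119, 119], [255, 255, 255]); // ~4.48, just below 4.5:1
```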

Text Size Contrast

Text size can be used as a very convenient tool for structuring information. A well-structured text can significantly lower the effort required for the viewer to ingest the information. The contrast between each heading level should be high enough for the structure to be visible.

If possible, a consistent size ratio between different heading levels usually looks more elegant. In general, the weight of the text (bold, regular, light) decreases as the heading level gets deeper.

Text Weight Contrast

Sometimes, the typeface you are using comes with many weights, like medium, semi-bold, light, and so on. In those cases, it’s usually recommended to skip one weight when pairing them to create enough contrast between them.

Avoid Unintentional Contrast

Combining text with an icon can help comprehension, improve visual appeal, or increase the importance of an element. However, always make sure the thickness of the text matches the thickness of the icon. That will make it feel as if it were part of the typeface. It does look more elegant, but it also saves your audience from even thinking about it, which is a recurring theme in graphic design. Good design is invisible!

“Although we can’t help but notice poor design, good design is much harder to detect. We’d go even further: good design is invisible. This raises a logical question: what, then, is good design? Is it when form follows function? Is it the perfect arrangement of design elements on a modernist grid? Is it 24-point headings and 8-point body text? Is it a solid set of principles that you work to every time on every job? The answer might surprise you — good design is all of this. And none of it.”

— Damien Stanyer, “Why is good design invisible?”

Conclusion

That might have surprised you, but creating and iterating on designs isn’t about making things pretty. Your job as a designer is to lower the cognitive load for people to ingest the information, create a rhythm that keeps your viewer engaged, and make sure everybody can access the content.

“The most important advice I would give to emerging designers — even those that have already won awards — is that learning never stops. You can always improve your craft, whether you’ve won one award or twenty. Remember, you’re never going to be an expert in everything. I have worked on so many things, from book design to exhibition design, hospitality, tech, and everything in between — and I’ve taken something new from the experience every time. Now I’m the executive creative director at a global agency, and I still find myself learning something new every day.”

— Lisa Smith, “Learning Never Stops

Next time you are faced with the need to design something, I hope those tips will make you feel a little bit more confident and comfortable! And remember, everybody can be a designer, but every designer has a lot to learn and to keep learning. It’s a process that never stops.

Further Reading

I created a list of additional resources about the topics that I have tried to cover briefly so far. (And if you think this list is missing something of value, please do leave a comment at the end of the article. Thank you! 💖)

]]>
hello@smashingmagazine.com (Laurent Baumann)
<![CDATA[How To Deal With Big Tooling Upgrades In Large Organizations]]> https://smashingmagazine.com/2023/05/big-tooling-upgrades-large-organizations/ https://smashingmagazine.com/2023/05/big-tooling-upgrades-large-organizations/ Wed, 17 May 2023 10:00:00 GMT If you work in software development, you probably know a thing or two about using and maintaining third-party packages. While third-party tooling has its fair share of downsides, there are plenty of advantages as well. The efficiency you get from code that someone else has already written speeds up development and is hard to deny. Sure, there are all sorts of considerations to take in before plopping code from a third party — accessibility, technical debt, and security, to name a few — but the benefits may make taking on those considerations worthwhile for your team.

Upgrades are also part of that set of considerations. Usually, your team may treat this sort of maintenance as a simple task or chore: upgrading dependencies and (automatically) validating that all of the features keep functioning as expected. You probably even have automated checks for keeping all package versions up to date.

But what if the third-party tooling you adopt is big? I mean big, big. That’s common in large organizations. I happen to work for a fairly large organization that leverages big third-party resources, and upgrading those tools is never as simple as running a package update and moving on. I thought I’d share what’s involved in that process because there are many moving pieces that require ample planning, strategy, and coordination. Our team has learned a lot about the process that I hope will benefit you and your team as well.

Some Context On My Organization

I work for Jumbo Supermarkten in the Jumbo Tech Campus (JTC), which is a department of over 350 developers working in agile teams on a range of digital products that help facilitate our core grocery and e-commerce processes.

We have a variety of responsibilities, where 70% of the work is allocated to the primary objectives for each team, and the remaining 30% is dedicated to anything a team wants, as long as it is beneficial to the JTC, which is very useful if you want to deliver value outside of your own team.

When we look at maintaining tooling and packages, balancing the goals of each team with the goals of JTC means that teams effectively maintain their own codebases while also collectively maintaining internally shared codebases that serve as the tooling and foundation of our applications.

Centralized Code As A Bottleneck

To build our applications with consistent standards, we rely on an internal design system and the component library we call Kompas (Dutch for “Compass”). We have built this system ourselves and rely on Vue to render our components and build interactions. Kompas is a hard dependency for virtually all of our applications to ensure uniformity.

This project was not allocated to a dedicated team. Instead, we adopted a strategy that introduced plenty of guidance to allow all front-end developers to contribute. Any developer can add new components to the library as well as features to existing components and keep everything in sync with the designs.

Teams normally work on business features since product owners love delivering customer value. The way we set up our process would allow a team to, in one sprint:

  • Make the required change in Kompas,
  • Have it reviewed by peers from both inside and outside a particular team,
  • Publish the latest version of the component library, and
  • Use that version in that team’s own application to deliver to the end user.

We can only do this with automation on repetitive processes — linting, formatting, quality assurance, testing, visual comparisons, and publishing — in order to provide enough room for developers to contribute to the process. Our component library is very much a living document of our design system, with multiple minor releases and patches a week. With semantic versioning, we can keep our own applications up to date easily and with confidence.

For bigger undertakings, such as setting up visual snapshot tests, we established temporary working groups alongside our existing teams that we called “front-end chapters” where members join on a voluntary basis. In these meetings, we discuss what needs to be done, and in the available 30% of free time we are allotted, the members of these teams carry out the work and report back to the chapter.

As you can imagine, we’ve spent a lot of time and effort ensuring the quality and making it a reliable part of our landscape.

This all began when Vue was in Version 2. That’s the version we baked into Kompas, which means we effectively forced our whole application landscape to follow suit. This worked perfectly for us; people could focus on their team’s needs while leaning on the support of the entire front-end chapter that works on Kompas.

Following the Vue ecosystem that we introduced, Vuex and Nuxt became part of our environment. And then Vue 3 was announced, and it was a massive breaking change from Vue 2! With the announcement, the end-of-life date for Vue 2 was set for December 31, 2023. We still have some time as of this writing, but the news had a massive impact that cascaded throughout our organization.

We Needed A Strategy

We needed to upgrade Vue from 2 to 3. The first thing that we needed to figure out was when we could reasonably start the process. To assess and strategize, we formed a small virtual team of developers consisting of members from various teams so that multiple perspectives were represented.

We figured that there would be a period of time when we would need to support both versions in order to allow time for migrating between teams. It would be nearly impossible to orchestrate a monolithic release. Thus, we prefer gradual incrementing over massive sweeping changes. On the other hand, having to maintain two versions of Vue for, basically, the same business feature presented costs in time and complexity.

So, in order to execute this process as responsibly as possible, we set out to figure out when we could start, taking into account the longevity of maintaining two codebases while getting early experience from upgrading. We started to map the different tech stacks for each team and plotted out potential bottlenecks for the sake of making the process of our work as widely visible as possible. At this time, our organization had a very flat structure, so we also needed to get internal stakeholders (i.e., product owners, architects, and managers) involved and convey the effect this upgrade would have on teams across the board.

Creating A Map

With our starting point set, we moved on to establishing a direction. Not having a dedicated team did pose some challenges because it meant that we needed to align everybody in a democratic way. This is, in Dutch culture, also known as polderen:

We try to find consensus in a way where everybody is equally happy, or unhappy, about the final direction.

And this can be challenging in a department that consists of many cultures!

One thing we knew we could rely on was the published best practices from official Vue resources to guide our decision-making process. Referencing the documentation, we did notice opportunities for incremental upgrades. The release of Vue 2.7 (Naruto) was really helpful in the sense that it backported features from Vue 3 back to a Vue 2-compatible version.

We also noted that in our landscape, not all applications were actually using Nuxt. A stable release of Nuxt 3 would be a prerequisite for those applications to even be considered for migration since the Vue major version is tightly coupled with the Nuxt major version. Luckily, some applications in our landscape are standalone Vue apps. These are ideal candidates for the first Vue 3-compatible components.

But first, we would need to have components that were compatible with Vue 3.

The Big Divide

By this point, we were confident enough to get to work. We had a plan and clear strategy, after all. The first order of business was to make sure that our component library was compatible with Vue 3, preferably while minimizing duplicative efforts.

We found a really nice way of doing this:

We created a new workspace called “Kompas-next” next to the regular components folder, which was scaffolded out using Vue 3. Then we imported the components from the original library.

This only works because:

  • The backported features in Vue 2.7 allowed us to move closer toward the Vue 3 composition API (among other things).
  • The component syntax between Vue 2 and Vue 3 isn’t radically different anymore.
  • Vue Demi allowed us to convert components, one by one, to be compatible with both versions.
  • We made sure that Kompas-next runs isolated tests to ensure stability.

We did have to slightly modify each and every component to adapt to the new standards. We’ll get to that process in a minute.

That said, we were able to publish two versions of our component library: one that is compatible with Vue 2 (Kompas) and one that is compatible with Vue 3 (Kompas-next). This, in turn, meant that the teams that did not have Nuxt as a dependency could potentially start migrating!

Getting Organized

Up to this point, most of the groundwork had been done in a relatively small team. We were in charge of the investigations, communication, and alignment. But we still needed to get stuff done — a lot of stuff!

With every developer being able to contribute, we came to an agreement that fits with the way everybody was already contributing to the component library:

If you touch a component that is not yet compatible, convert it to be compliant with both Vue 2 and Vue 3 using Vue-demi. Add the existing component with tests to the imports of the Kompas-next folder.
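As an illustration only (the component name and props below are hypothetical, not actual Kompas code), a component converted this way might look like the following sketch, with vue-demi re-exporting the right APIs for whichever Vue version is installed:

```js
// KButton.js (hypothetical name): written once, runs on Vue 2.7+ and
// Vue 3 because vue-demi resolves to the installed Vue version.
// Real components would live in SFCs; a template string keeps this
// sketch self-contained.
import { defineComponent, ref, computed } from 'vue-demi';

export default defineComponent({
  name: 'KButton',
  props: {
    label: { type: String, required: true },
  },
  setup(props) {
    // Composition API works identically on both sides of the divide.
    const clicks = ref(0);
    const ariaLabel = computed(() => `${props.label} (${clicks.value})`);
    const onClick = () => { clicks.value += 1; };
    return { clicks, ariaLabel, onClick };
  },
  template: `<button :aria-label="ariaLabel" @click="onClick">{{ label }}</button>`,
});
```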

Having communicated this strategy early in the process, we immediately saw the Kompas-next library growing. The Vue core team has put so much effort into closing the gap between the two versions, which made our lives much easier.

Feedback From Early Adopters

The teams that were not blocked by a Nuxt 3 release could spend their time migrating their complete app to Vue 3, providing feedback along the way on how we were setting up our packages and converting components.

Seeing the first applications using Vue 3 was a milestone we could all be proud of since we managed to reach it together, collaboratively, and with a united strategy. The strategy worked for us because it closely resembled the way we were already working.

There were indeed some components that were not migrated using this strategy, which indicated to us that they were stale in terms of development. We reasoned that “stale” equals “stable” and that it would be perfectly fine to migrate them by manually assigning and distributing the work since each was likely to be a one-off migration per component.

We also started to add Vue 3-specific capabilities to our component library, such as our own composables. I think that’s a nice testament to the investment and adoption by our front-end chapter.

With the component library now supporting Vue 3, we cleared a significant migration hurdle in our organization. We enabled teams to start migrating to Vue 3, and we encouraged new applications to use the latest standards. As a result, we could start thinking about a deprecation path for our Vue 2 codebase. We were cautiously optimistic and aligned the end-of-life date for Kompas with the same date for Vue 2: December 31, 2023.

So, yes, we are not yet finished and still have work to do. In fact, we had…

Two (Minor) Elephants In The Room

To support communication between micro-applications that run on our e-commerce domain, we had resorted to using Vuex in the past. We used to register stores globally so other applications could dispatch actions and retrieve a shared state. This is now gradually being migrated in the sense that we are replacing Vuex with Pinia for internal state management.
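
For illustration, here is roughly what such a replacement looks like as a minimal Pinia store sketch (the cart store and its shape are hypothetical, not our actual code):

// cart.ts: a hypothetical Pinia store standing in for a migrated Vuex module.
import { defineStore } from "pinia";

export const useCartStore = defineStore("cart", {
  state: () => ({ items: [] as string[] }),
  getters: {
    itemCount: (state) => state.items.length,
  },
  actions: {
    addItem(item: string) {
      // Plain method calls replace the dispatch/commit boilerplate of Vuex.
      this.items.push(item);
    },
  },
});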

For cross-app communication, we are in the process of decoupling Vuex as an external interface and promoting the use of custom events tied to a specific domain. This prevents us from locking ourselves out of future state management tooling.

We are also in the process of preparing our Nuxt applications to be cleared for migration as well. Within our e-commerce domain, we’ve been building specific modules that take a lot of overhead out of our hands: They handle tasks like setting meta headers, security, and analytics. These are being rewritten to use plugins rather than modules. The impact of this breaking change is smaller because it is limited to the teams that use these modules. We see that these teams are using a similar strategy, albeit on a smaller scale, to organize and structure the tasks at hand.

Looking Back

I believe we have a few key takeaways from how we upgraded (and continue to upgrade) from one version of a large third-party resource to another within our large network of teams and shared codebases. The lessons we learned are relevant beyond Vue and can be applied by other large organizations migrating between versions of a core piece of architecture.

Let’s review what we learned:

  • Ensure the transition period is clear and as short as possible.
  • Facilitate breaking the work down into small steps that progress iteratively and solicit feedback from those involved in the process as early and as often as possible.
  • Onboard key stakeholders to make sure your team has ample time and resources to do the work.
  • Define a strategy that fits with your organization’s culture.
  • Foster a collaborative mindset and establish clear communication between teams.
  • Celebrate wins, even the smallest ones!

The Work Is Never Done, Really

As I mentioned earlier, maintenance is a never-ending piece of the software development process. As Vue creator Evan You stated in the State of the Vuenion 2023, Vue plans to ship more frequent updates and releases. This will keep impacting our work, but that’s okay. We have a plan and blueprint for future releases.

We’re not there yet, but we now know how to get there!

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Joran Quinten)
<![CDATA[Design Patterns Are A Better Way To Collaborate On Your Design System]]> https://smashingmagazine.com/2023/05/design-patterns-collaborate-design-system/ https://smashingmagazine.com/2023/05/design-patterns-collaborate-design-system/ Tue, 16 May 2023 11:00:00 GMT True collaboration in creating or maintaining a design system is essential to superb product design, but working with other humans is always tricky. The collaborative nature of a design system comes with a lot of pitfalls. At its best, it is the product of close alignment between developers and designers, but it doesn’t always happen that way.

Some painful memories:

  • A brilliant designer I worked with made a gorgeous new set of elements and examples for the company to use, but the other designers in the company ran into many situations where it was more expedient to just copy and remake (i.e., detach) the component. The design system was used less and less since contributing to it was always a lower priority than working on a product opportunity.
  • A developer I worked with built the design system components such that the padding in every text input, button, and so on always had to be the same in every layout, leading to awkward results when (for example) a button included double-byte characters, only icons, or just longer labels.
  • I did a lot of work on variations of nested components (button bar, toggle buttons, segmented controls) that were designed to use the same style properties as base components (like a button), but the developers I was working with made brand new components for each that didn’t. So, I had to document and specify the many, many identical sets of style values for many, many sets of slightly different components.

The list goes on. I’m sure you have your own examples.

Getting Aligned

I’ve worked in many kinds of teams, in large companies and start-ups, where these collaboration issues kept getting in the way, even (or especially) with very talented and smart individual contributors. Getting aligned with my teammates doesn’t happen automatically or just because we go to lots of meetings. In fact, it’s very easy to start a project together and get pretty far into it before finding out we all had very different ideas about what we were doing. When it comes to complex questions of re-using an existing component vs. making something new or how to stay on the same page without blocking each other, alignment takes practice for any team.

The method for making design systems I’ll talk about probably works best in environments where you are a sole designer (or among a small number of designers) on a cross-functional team, including front-end or full-stack developers, led by a product owner. You might collaborate with other designers in other teams, but this is your “first team.” In this context, you have a lot of freedom but also a lot of responsibility.

You need an idea for nurturing the design system that doesn’t depend on organizational mandates or a specific “process” and one you can apply yourself. After all, a design system is a product with users, and we know how to balance the user’s needs with product opportunities, right? (Yes!)

The approach described below is not common or widely used, but in my experience, it has solved many team collaboration problems, including:

  • Eliminating the “hand-off” step: a truly perverse mini-waterfall built into many relationships between designers and developers.
  • Ensuring that all designers and developers contribute to the design system as a part of regular product work.
  • Connecting design systems to product impact: measurably speeding things up by making more reusable elements and modules in design and development.

A New Use For An Old Idea

What has worked for me in these kinds of teams is a twist on an old idea: design patterns. Elsewhere, design patterns are described as “a toolkit of solutions to common problems” (Refactoring.Guru) or as a “description or template for how to solve a problem that can be used in many different situations” (SourceMaking). My favorite definition of the concept is from The Timeless Way of Building:

“Even the most complicated, sophisticated things are defined by a small number of composable patterns.”

— Christopher Alexander

You probably don’t think of your own design activities as a “pattern-making” practice, but the idea has a lot of very useful overlap with the practice of making a design system. The trick is to collaborate with your team to find the design patterns in your own product design, the parts that repeat in different variations that you can reuse. Once you find them, they are a powerful tool for making design systems work with a team.

Design Patterns For Design Systems

To see how, first, let’s get specific about the definition of “design pattern” in our product design context. Let’s call a “design element” a small isolated component like a “button,” “chip,” or “card,” and let’s describe a design pattern as a reusable combination of elements for a purpose, a larger module that can do some product experience work on its own.

The elements are the focus of the design systems in most companies I have worked at, and creating them is important and a lot of work. I am sorry to say, however,

Having a good set of elements doesn’t help you get the value out of a design system, save you much time, or by itself ensure designers and engineers are aligned.

For this reason (and the availability of great existing elements from Tailwind, Bulma, Skeleton, or of course, MUI) I have de-emphasized them in my own work, often just restyling elements created by others. The elements are important, and you do need a set that everyone uses, but they don’t do the work of implementing a feature or valuable experience.

You might be thinking that many of these systems do come with combinations of elements, like the “pre-built components” that MUI ships with for a “Data Grid” or the “blueprints” in the Salesforce Lightning Web system for a “List Builder.” Are these the patterns that can help us?

Unfortunately, they are not. These are patterns for sure, but they probably aren’t useful as-is for you. Your product has its own needs. You can use them as a starting point, but in my experience, it takes longer to rework them into something that solves the problem.

To be useful for you, a design pattern has to come out of and express some reusable part of your particular product experience — those parts of the design you find yourself making again and again.

Here are some examples of these useful, product-specific design patterns in products:

  • A tile in a TV app, which people use to browse things to watch in lists. This is sort of a “card” pattern, but not really! Every streaming service has its own particular kind of tile and includes different content and controls that suit that product best.

  • A dashboard meter in a data-visualization app like Google Analytics. Again, this is sort of a “panel” pattern, but not really! Each part of the dashboard might have different kinds of meters, with titles, category labels, “big numbers,” charts, text snippets, or filtering controls, and the number of elements in a meter varies by app.
  • A tree view in a social genealogy app that lets users see relationships between people in a way that adapts for display on small devices. Some products focus on researching your family, others on visualizing relationships.

In each of these cases, designers and developers made their own product-specific patterns. Those patterns are valuable because once a team has defined them, the next project that the team does gets easier. They develop and grow a kit of parts that save them time (and that they can polish and refine). The patterns, not the elements, are the heart of this (better) kind of design system.

Taking this a step further, I would say that a lesson from these patterns is that

All designers and developers can make their design system better and more effective by focusing on patterns first (instead of the elements), making sure that each is completely reusable and polished for any context in their product.

Pattern work can be a fully integrated part of both getting some immediate work done and maintaining a design system.

Design Patterns For Collaboration

This kind of design pattern activity can be a direct path for designers and developers to collaborate, to align the way things are designed with the way they are built, and vice-versa. For that purpose, a pattern does not have to be a polished design. It can be a rough outline or wireframe that designers and developers make together. It needs no special skills and can be started and iterated on by all. And collaborating on this form of a design pattern makes it possible for designers and developers to work in parallel.

That’s all pretty abstract. It’s easier just to try an example.

A Design Pattern Story

Let’s say that we’re on a team together, working on an app called “WeTrip.”

The product opportunity comes from the reality we have all probably dealt with: when a group of people or a family travels together, they usually have a lot of trouble deciding what to do or where to eat.

This app makes group travel easier by giving people an easy way to propose and vote on the plan for each day. Instead of having long conversations built around sentences like “I dunno, what do you want to do?”, travelers get a tool that takes the friction out of the logistics of a vacation or a trip.

The app has some seed funding, but in order to survive, it needs some “minimum viable” version of itself to prove that it’s something needed and valuable. Everyone on the team wants to get going! Nobody wants to be waiting for a design.

The designers, engineers, and product people all meet and pick the names of some basic objects and their properties. They start with a “Person,” someone on the trip who votes on places to go together for a meal or sightseeing. They sketch things out on a whiteboard.

This is their first pattern.

They move on, describing things like a “Place,” a location someone wants to visit.

And an “Occasion” pattern, a time the group will do things together like eat, and so on.

The process can work with a physical whiteboard, shared document, collaboration app, or whatever. All that is important is that everyone participates so they are aligned and get the details they need to start work.

With this rough outline, they can see that some of the parts of these patterns are elements they can pull from existing design systems.

They decided to use some restyled MUI elements. Those have defined properties (named attributes of a component, like “color” or “content”) already and will be a nice shortcut. They pull them into Figma (their design tool of choice) and development (a React web app with the MUI library as a dependency). They add some of these MUI elements and their standard properties to each pattern.

For each pattern, they create a page in a shared Notion document that everyone can edit and update. They start by adding properties from the MUI elements they’ve chosen.

The team combines the properties from the MUI elements with others they’ve sketched out and flattens the properties a bit. They group the properties so that it’s clear what is most important and secondary.

The Figma component will have a different variant for each important property (like activity or actionsAvailable). And each of the element properties will become part of the component in development, of course. In this way, the design and development are aligned — not necessarily completely the same in every detail, but in the ways that matter, moving in the same direction.

The team talks about more ideas for each pattern. Adding properties doesn’t mean they will appear in the final design, just that an idea could be part of the experience, so it’s a low-stakes conversation where final decisions don’t need to be made.

After going through the same process for the “Place” and “Occasion” patterns, the designers and developers have a lot of what they need to make progress. They have agreed on the names of things and what the important properties are. The patterns are defined in a form that the whole team can see and edit, and they start work.

An engineer might stub out a “Person” component like the one below while a designer is sketching it out in Figma with no bottlenecks.
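
The stub itself is not reproduced here, but it might look something like this sketch (the prop names are illustrative guesses based on the pattern document, not the team’s actual code):

// Person.tsx: an illustrative stub of the "Person" pattern.
import React from "react";

interface PersonProps {
  name: string;
  avatarUrl?: string;
  isOrganizer?: boolean;
}

const Person: React.FC<PersonProps> = ({ name, avatarUrl, isOrganizer }) => (
  <div className="person">
    {avatarUrl && <img src={avatarUrl} alt={name} />}
    <span>{name}</span>
    {isOrganizer && <span className="badge">Organizer</span>}
  </div>
);

export default Person;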

Of course, the engineers figure out that there are some properties they need that they missed at first, like a presence property for a user (after all, in order to decide how to notify a user, it helps to know whether that user is in the app right now or whether a notification would work better). They add that to the document and message the rest of the team.

At the same time, the designers are fleshing out the patterns, using the MUI Figma library where possible, and adding new components where needed. When the team sees the addition of a presence property to the Person pattern, they decide to make a presence indicator and group it with the primary elements. As long as they are keeping the simple pattern document up to date, there is no handoff or waiting around.

Sometimes there are big questions to resolve about the experience. But that is the occasion for the next meeting about what the primary views in the app should be.

The team meets again and comes up with a “People” view (a list of people on the trip, with their status), a “Schedule” view (with a list of occasions and the plan for each day), and a “Proposals” view (to see and propose places to go) — more design patterns. For this pattern documentation, one of the product owners wants to use a wireframing tool instead of an outline (as in the previous example). That’s fine. Pretty much anything works to describe patterns as long as it shows elements and groups (and it’s what a team likes using).

In these patterns, lists of Person, Occasion, and Place patterns are nested inside of each view. It becomes clear that there will have to be two versions of a Person pattern in the app, so a variant property is added to that pattern.

As they work, if an engineer gets a little ahead of whatever design work exists, they can either use standard MUI components or add a proposal to the patterns document. And designers can add new components if the design starts to need them. All parties are able to make changes without blocking each other.

Organizational Needs

Now, I should pause here to note that there are plenty of teams where this kind of pattern definition is not the primary product definition activity and where other stakeholders (engineering managers and so on) have a say in how the design system is built. Not all teams are small and have as much ownership. Even in larger companies, however, I believe design patterns can be very useful and help make a case for development work (since they show how the design system is helping teams get things done). But in those contexts, design patterns may be small parts of other organizational processes and not as important overall.

In this small team, the struggle to justify time on a design system is for a different reason: there’s a great temptation to put it off because everyone wants very badly to ship something sooner, and it feels “extra.” But every member of the team also knows that the minute they want to iterate on their product, that work will be easier if they have created a design system, and so they keep their good practice going.

Putting It All Together

Once the patterns are established as a way for the entire team to collaborate, final designs and views come together more quickly (because the entire team was able to start together and finish together). The visual design happens at the end rather than being a bottleneck before people can start work.

Integrating Patterns In Product Planning

As working with patterns becomes more established and mature, patterns can be broken out into separate repositories and polished on their own, either as a separate library in Figma or a set of modules in development. If a team gets larger, there might even be an official owner for each pattern who handles bugs or polishes details.

Each pattern’s properties list can be turned into an API once its module is trustworthy enough to use in new design and development. In the WeTrip example, the scrolling list of places for today is reused in a search results view when it’s added later.

Since patterns are by their nature only the reusable parts that save designers and developers time, patterns can make working on the design system something with a hard-core product improvement impact. Re-use can be captured as a key metric and a factor in prioritization. The amount of reusable work being generated can be tracked automatically in design and development tools (much like test coverage).

If the reuse of patterns becomes common enough that it needs careful management, patterns can become part of a federated module build process (like the Module Federation feature built into webpack 5). In short: patterns lend themselves very well to being part of the toolchain of many modern development processes.

Your Turn

I imagine that there are many teams that already practice some of these concepts in the collaboration between design and development, and I am very eager to hear about that! It has been a happy improvement as a method for myself and the people I have worked with, and I would love to hear your stories.

For others, I hope this has been a good introduction to a vision for you and your team of an alternate reality where you can seamlessly collaborate on design systems without as many pitfalls. I wish the best to you and your team as you find your way to such harmony and a happy design outcome!

]]>
hello@smashingmagazine.com (Ben Clemens)
<![CDATA[Solving Media Object Float Issues With CSS Block Formatting Contexts]]> https://smashingmagazine.com/2023/05/media-object-float-issues-css-block-formatting-contexts/ https://smashingmagazine.com/2023/05/media-object-float-issues-css-block-formatting-contexts/ Mon, 15 May 2023 13:00:00 GMT Let’s imagine we’re making a small component. It can be anything, really, but let’s use a media object as an example. Nicole Sullivan gave a solid definition of media objects way back in 2010, and you probably already know the pattern well: some form of media (often an image, sometimes a video) on the left and text beside it on the right.

This is the basic HTML for the layout, minimized for brevity:

<section class="container">
  <article class="float-left">
    <img src="https://picsum.photos/100">
      <p>I've never had to cook or clean since I discovered Xyz. They perform all my tasks for me. I recommend them.</p>
      <h3>Dan Somore</h3>
  </article>

  <!-- more articles -->

</section>

This HTML gives us a <section> element that is the container for four <article> elements, where each one is a testimonial container holding an <img> and a block of text — our media objects.

Let’s apply some light styling in CSS:

/* Give the parent container breathing room */
.container {
  padding: 20px;
}

/* 
  Styles for each testimonial container 
  Each container is floated left
*/
.float-left {
  border: 2px solid blue;
  background-color: transparent;
  float: left;
  width: 45%;
  min-height: 150px;
  margin-bottom: 20px;
  margin-right: 20px;
}

/* Testimonial images are floated left */
img {
  float: left;
  margin-right: 10px;
}

This code is by no means perfect. In fact, it introduces the wrapping and overflow issues we’re about to discuss. We will look at these issues together before getting into solutions.

Issue 1: Height Collapsing

When an element is floated in its container, it is taken out of the normal document flow and placed in a floated position, making no contribution to the container’s height. In a container of many floated media objects, the container element’s height collapses to contain only non-floated elements. The collapsed height might be inconspicuous in containers without a border or non-floated elements, and it can disrupt the layout of elements that come after the media object container. The issue is easy to spot, however, when a non-floated element sits in the container among the floated ones.

Let’s add a border to the parent container to see the height-collapsing effect.

The height of the content is what influences the height of the testimonial container. If the image were in the container’s flow, it would be taller than the text, and the container would adjust to it. But, alas, that’s not the case since we introduced a block formatting context when floating the image.

A popular solution is a single line of CSS on the testimonial’s parent container:

.container {
  overflow: auto;
}

The BFC this generates establishes a new formatting context on the container itself, one that contains all of the container’s child elements, including the floated media objects. It effectively prevents the testimonial elements from being displaced beyond the parent container’s borders — no extra divs or pseudo-elements are needed, as they are with the clearfix approach.

See the Pen Float Solutions: overflow: auto [forked] by Geoff Graham.

That certainly gets the job done! But I want to show you one more way to do this because I believe it’s the best of the bunch.

The Best Solution: display: flow-root

display: flow-root was introduced to address inconsistencies associated with using overflow for generating BFCs. In fact, display: flow-root was explicitly designed to produce a BFC, while the overflow property is designed to manage content that overflows its container. Consequently, overflow can induce unintended side effects, from unwanted scrollbars to data loss.

That’s why I recommend using display: flow-root. It is meant to create a BFC when you need it, whereas the other solutions are more like workarounds.
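
Applied to the same .container from our example, it is a one-line change:

.container {
  display: flow-root; /* creates a BFC that contains the floated children */
}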

Conclusion

CSS block formatting contexts are great because they allow you to leave the main document flow, allowing elements to interact differently in a layout. But, of course, those different interactions can feel like buggy behavior if you’re unaware that you’re actually working in a different formatting context.

This is exactly why we have modern layout techniques like Flexbox and Grid. Before we had them, floats were a nice trick for faking columns. But the BFC they created wasn’t so nice. Hence clever workarounds like the clearfix to create a BFC to wrangle the other BFC.
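
For reference, a common form of that clearfix hack looks like this (shown only for historical context):

.container::after {
  content: "";
  display: block;
  clear: both;
}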

Perhaps the bigger takeaway from all this, though, is to evaluate your layout strategy. If you’re reaching for a float, is it really the best option for what you’re trying to do? Because if so, you may as well embrace the natural text-wrapping behavior rather than trying to fight it. And if you don’t want to fight it, that’s a sure sign you ought to reach for a more modern layout technique, like Flexbox or Grid.

Resources

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Gabriel Shoyombo)
<![CDATA[Designing Sticky Menus: UX Guidelines]]> https://smashingmagazine.com/2023/05/sticky-menus-ux-guidelines/ https://smashingmagazine.com/2023/05/sticky-menus-ux-guidelines/ Fri, 12 May 2023 11:00:00 GMT We often rely on sticky headers to draw users’ attention to critical features or calls to action. Think of sidebar navigation, CTAs, sticky headers and footers, “fixed” rows or columns in tables, and floating buttons. We’ve already looked into mobile navigation patterns in Smart Interface Design Patterns, but sticky menus deserve a closer look.

As users scroll, a sticky menu always stays in sight. And typically, it’s considered to be a good feature, especially if the menus are frequently used and especially if we want to speed up navigation.

However, sticky menus also come with a few disadvantages. In his recent article on Sticky Menus Are Problematic, And What To Do Instead, Adam Silver argues about some common usability issues of sticky menus — and how to solve them. Let’s take a closer look.

When Sticky Menus Are Useful

How do we decide if a menu should be sticky or not? This depends on the primary job of a page. If it’s designed to primarily convey information and we don’t expect a lot of navigation, then sticky menus aren’t very helpful.

However, if we expect users to navigate between different views on a page a lot and stay on the page while doing so — as it often is on long landing pages, product pages, and filters — then having access to navigation, A-Z or tabs can be very helpful.

Also, when users compare features in a data table, sticky headers or columns help them verify that they are always looking at the right piece of data, which aids understanding. That’s why sticky bars are so frequently used in eCommerce, and in my experience, they improve the discoverability of content and the speed of interaction.

Keep Sticky Headers Small, But Large Enough To Avoid Rage Taps

The downside of sticky menus is that they typically make it more difficult for users to explore the page, as they obscure content. Full-width bars on mobile and desktop are common, but they need to be compact, especially on narrow screens. And they need to accommodate accessible tap sizes to prevent rage taps and rage clicks.

Typically, that means we can’t have more than five items in the sticky bar navigation. The choice of the items displayed in the sticky menu should be informed by the most important tasks that users need to perform on the website. If you have more than five items, you probably might need to look into some sort of an overflow menu, as displayed by Samsung.

Whenever users have to deal with forms on a page on mobile, consider replacing sticky menus with accordions. Virtual keyboards typically take up to 60% of the screen, and with a sticky bar in view, filling in a form quickly becomes nothing short of impossible.

Accessibility Issues of Sticky Menus

By their nature, sticky menus always live on top of the content and often cause accessibility issues. They break when users zoom in. They often block the content for keyboard users who tab through the content. They obscure links and other focusable elements. And there is often not enough contrast between the menu and the content area.

Whenever we implement a sticky menu, we need to make sure that focusable elements are still visible with a sticky menu in action. And this also goes for internal page anchors that need to account for the sticky bar with the scroll-padding property in CSS.
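
For example, a sticky header that is roughly 4rem tall could be accounted for like this (a minimal sketch; the value is an assumption to adjust to your header’s actual height):

html {
  /* Keep anchor targets from hiding behind the sticky header. */
  scroll-padding-top: 4rem;
}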

Avoid Multiple Scrollbars Of Long Sticky Menus

When sticky menus become lengthy, the last items on the list become difficult to access. We could make them visible with some sort of an overflow menu, but often they appear as scrollable panes, causing multiple scroll bars.

Not only does this behavior cause discoverability issues, but it’s also often a cause for mistakes and repetitive actions on a page. Ideally, we would prevent it by keeping the number of items short, but often it’s not possible or can’t be managed properly.

A way out is to show the menu as an accordion instead in situations when the space is limited, especially on mobile devices. That’s what we do at Smashing Magazine in the checkout, with a button that reveals and hides the contents of the cart when needed.

Partially Persistent Menus

Because sticky menus often take up too much space, we could reveal them when needed and hide them when a user is focused on the content. That’s the idea behind partially persistent headers: as a user starts scrolling down, the menu disappears, but then any scrolling up prompts the menu to appear again.

The issue with this pattern is that sometimes users just want to jump back to a previous section of the page or double-check some details in a previous paragraph, and the menu often gets in the way. Page Laubheimer from NN/Group recommends using a slide-in animation that is roughly 300–400ms long and will preserve the natural feel without being distracting.
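
A minimal sketch of those mechanics might look like this (the .site-header selector, the is-hidden class, and the 100px threshold are assumptions; a CSS transition on the header would provide the 300–400ms slide):

// Hide the header when scrolling down, reveal it on any upward scroll.
const header = document.querySelector<HTMLElement>(".site-header");
let lastY = window.scrollY;

window.addEventListener("scroll", () => {
  const y = window.scrollY;
  // Toggling a class lets CSS animate the header in and out.
  header?.classList.toggle("is-hidden", y > lastY && y > 100);
  lastY = y;
});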

Alternatives To Sticky Menus

In some situations, we might not need a sticky menu after all. We can avoid their downsides with shorter pages or with lengthy pages that repeat relevant calls to action or navigation within the page.

We could display a table of contents at the top of the page and bring the user’s attention back to it with a back-to-top link at the bottom of the page.

Wrapping Up

Whenever the job of the page is to help users act, save, and compare, or we expect users to rely on navigation a lot, we might consider displaying sticky navigation. They are most harmful when there isn’t enough space anyway, as it often is with forms on mobile devices.

Sticky menus do come at a cost, as we need to account for usability and accessibility issues, especially for zooming, keyboard navigation, and anchor jumps. Add them if you need them, but be careful about plugging them in by default.

We need to prioritize what matters and remove what doesn’t. And too often, the focus should lie entirely on content and not navigation.

You can find more details on navigation UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Further Resources

Of course, the techniques listed above barely scratch the surface. Here are wonderful articles around sticky headers, from design considerations to technical implementations:

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Boost Your Design Workflow With Setapp]]> https://smashingmagazine.com/2023/05/boost-design-workflow-setapp/ https://smashingmagazine.com/2023/05/boost-design-workflow-setapp/ Thu, 11 May 2023 14:00:00 GMT This article is a sponsored by Setapp

As someone who wears multiple hats, it is challenging to balance a full-time job, freelance projects, and all sorts of creative endeavors.

This is how I started off: By day, I’m a full-time product designer. By night, I juggle all sorts of freelance work and creative projects.

I am currently self-employed. However, there are challenges that come with being my own boss: Working with clients, sales and negotiation, invoicing, building a personal brand, crafting a content strategy, time tracking, project management… The list goes on.

Trying to keep up with everything used to be tough. No matter how hard I tried, my to-do list always seemed never-ending. I was constantly feeling overwhelmed.

I thought to myself, “There’s got to be a better way.”

After analyzing my workflow, I realized that many tasks could be simplified or automated so that I could save time, focus on high-value tasks, and work fewer hours.

After years of trial and error, I discovered a range of tools and strategies that helped me save time and stay organized to focus on what really matters.

The apps mentioned in this guide are available on Setapp. Whether you’re a Mac user or not, these hacks will help you get more done in less time and improve your quality of life. I hope you find value in this guide.

Streamline Your Workflow With the Best Apps

You can use Setapp to access 240+ apps on your Mac and iPhone under a single monthly subscription.

Personally, I use Setapp to do three things:

  1. Try out apps that could help save time. Some of these apps cost more than Setapp’s subscription, so it’s a relief that I do not need to pay for each one individually.
  2. For apps that I only need to use occasionally, I can quickly install and uninstall them as needed, with no extra cost. This saves me precious space on my Mac and ensures that I’m not cluttering up my system with unnecessary apps.
  3. Since Setapp’s library is updated regularly, I always get to try out new apps to further enhance my workflow.

Track Time & Eliminate Distractions

As a freelance designer, I need to track how much time I spend on each project to calculate my billable hours. I used to manually create events on my calendar and calculate the hours spent on each project. It’s a waste of time, and sadly, it is inaccurate.

To solve this problem, you can use Timemator to track your time accurately and minimize distractions.

With Timemator, you can set up auto time-tracking rules for specific apps, files, or websites. For example, you can set rules so that the timer starts tracking when you work on a specific project on Figma or Adobe Photoshop.

The timer runs quietly in the background so that you can stay focused without any interruptions. You no longer need to manually start or pause the timer.

Pro tip: Use it to reduce distractions! Set up auto-tracking to track how much time you spend on meetings, talking to teammates or clients on Slack, or watching Netflix.

To help you identify where you’ve spent your time, Timemator gives detailed reports and analytics so you can reduce or eliminate time-wasting activities and get more done in less time.

The Only Font Manager You Need

As designers, we all know that font selection can make or break a creative project.

I was frustrated with Font Book (the default font manager on MacOS). It wasn’t user-friendly. Searching and comparing fonts was a chore.

I found Typeface to be useful — especially when you need to quickly browse through your font collection, customize the preview text and size in real-time, and compare to see how different fonts look side-by-side.

Over the years, I have saved up a huge font library. Typeface is able to load all my fonts quickly and remove duplicate fonts that bloat up my computer. It supports variable fonts and OpenType font features and has robust features for the busy designer.

For fonts you don’t use often, you can choose to activate them only when necessary. This way, your computer stays clean and fast.

As a bonus, you can also easily organize fonts into custom collections or tags.

Fastest Way To Create Device Mockups

When designing, we often need to create high-quality, professional-looking phone, tablet, and computer mockups to showcase our designs.

I used to spend hours searching for device mockup templates and launch Adobe Photoshop in order to use those templates. The whole process was time-consuming, so I switched to a tool called Mockuuups Studio.

All you need to do is drag and drop a screenshot of your website or app into it, pick a scene, and it will generate thousands of mockups. It’s pretty neat.

You can filter through scenes, models, and devices to find the perfect mockup for your digital product. Then, add hands, overlays, realistic shadows, or backgrounds to your device mockups. In the example above, I have filtered ‘iPhone’ mockups only.

Since it’s cloud-based, you can access it anywhere and collaborate with your teammates in real time too.

To further speed up your workflow, you can use their Figma, Sketch, or Adobe XD plugin. This is their Figma plugin:

Create Screenshots & Screen Recordings, Fast

When presenting designs (especially when working remotely), I take screenshots and screen recordings for my clients every day.

But instead of using the default Mac screenshot tool, CleanShot X is a better solution. This is an essential tool for every Mac user.

To quickly take a screenshot, use this shortcut key on your Mac: Command + Shift + 4.

This tool lets you record MP4 or GIF with your desktop icons hidden, capture scrollable content, and annotate, highlight, or blur screenshots to hide sensitive personal information.

An example of how I annotate my screenshots:

I’ve used this tool for years with zero complaints. This tool will make your life easier when sharing screenshots with clients or on social media.

A cool feature you’ll also love: You can capture and copy any text, so you’ll never have to manually retype it again!

Your workflow will become much more streamlined and efficient since you no longer get bogged down in the technical details.

Never Waste Time Searching For Meeting Links Again

It’s challenging to keep track of various meetings, their details, and attendees, especially when switching between Google Meet, Zoom, your email inbox, and calendars.

To solve this problem, you can use Meeter to schedule or join meetings with one click right from the menu bar on your Mac.

It supports Google Meet, Zoom, and Microsoft Teams. When you want to join a meeting, you no longer have to waste time searching for meeting links, then copy and paste the link into the browser. Instead, you can now focus on being present in every meeting.

The tool allows you to directly call your FaceTime contacts and phone numbers and jump into recurring calls from the menu bar too. Pretty simple!

Save Time With Spotlight On Mac

When working with multiple files and apps on your Mac, you need to be able to quickly find and access them instead of navigating through different folders.

With Spotlight, you can do these things quickly. While this is not an app, it’s one of the most powerful features on Mac that can save you plenty of time.

To open Spotlight, simply hit Command + Spacebar on your keyboard and start typing.

Then, try these on Spotlight:

  • Perform quick calculations.
    No need to open a calculator app. Simply type in your calculation in Spotlight and hit enter. It’s that easy.
  • Search for apps.
    Quickly find any app on your Mac.
  • Search the internet.
    Type your search term, and it will launch your default browser with the search results. You’ve just saved a few clicks.
  • Find files or folders.
    Type in the name of the file or folder, and you have it.

  • Check the weather.
    Type “weather” followed by your location, and it will give you up-to-date information on the current weather conditions and forecast.

Cool, right? Learning how to use Spotlight effectively is a game-changer. Give it a try, and see how much time you can save.

Design Accessible Interfaces

As a product designer who also builds websites for clients, it’s a challenge to find and create the perfect color palettes while working on multiple projects at once. In the past, I've had to rely on a combination of tools like swatch libraries and notes to keep track of my palettes.

If you’re a designer or a developer, you’ll love Sip — a powerful color picker that can help you design beautiful and accessible interfaces easily.

With Sip, you can quickly grab colors right from the Mac menu bar and drop them into any design or development tool, including Adobe Photoshop, Figma, and Sketch. This makes it easy to create custom color palettes that match the client’s brand.

You can create and save custom color palettes, and the quick access menu that floats on the side of your desktop gives you quick access to your color palettes.

Currently, it supports 24 of the most popular color formats in the industry, like Android XML, CSS hex, RGB, and CMYK.

Now, my favorite feature is Sip’s Contrast Checker. In the example below, you can use the color picker to check the contrast between the gray text and white background, ensuring that it meets accessibility standards and is legible for all users.

Tip: Always make sure the contrast between the text and background is greater than or equal to 4.5:1 for small text and 3:1 for large text. If the color contrast fails, click on the ‘FIX’ button to improve it!

Declutter Your Mac’s Menu Bar

If you have a bunch of apps running on your Mac, your menu bar may be cluttered with all sorts of icons and notifications.

Just like physical clutter, digital clutter takes up mental space and affects your focus, too! To solve this problem, you can use Bartender.

Bartender allows you to organize your menu bar icons into neat and tidy groups or hide them completely — as simple as that. You can collapse your menu bar icons into a customizable dropdown menu so it remains clutter-free.

In the above example, most of my menu icons are hidden, except Figma and the battery level indicator.

After using it for over a month, I am able to focus better. It’s one of those subtle quality-of-life improvements that can have a big impact on your productivity and mindset.

Wrapping Up

I wish I had discovered these tools sooner!

The apps I’ve shared above are available on Setapp. With a single monthly subscription, you get access to 240+ Mac and iPhone apps. They offer a free 7-day trial, so you can try it out and decide if it’s right for you.

These tools have completely transformed my workflow and helped me become more productive and less stressed. I hope that these tools will do the same for you so you can make the most of your time. After all, time is a limited resource, and it's up to us to use it wisely.

Thank you for reading. Have a productive day!

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Rachel How)
<![CDATA[A Guide To Redux Toolkit With TypeScript]]> https://smashingmagazine.com/2023/05/guide-redux-toolkit-typescript/ https://smashingmagazine.com/2023/05/guide-redux-toolkit-typescript/ Wed, 10 May 2023 10:00:00 GMT If you are a React developer working on a complex application, you will need to use global state management for your app at some point. React Redux is one of the most popular libraries for state management used by many developers. However, React Redux has a complex setup process that I’ve found inefficient, not to mention that it requires a lot of boilerplate code. The Redux team developed Redux Toolkit to simplify the process.

This article is for those with enough knowledge of React and TypeScript to work with Redux.

About Redux

Redux is a global state management library for React applications. If you have only used useState() hooks to manage your app state, you will find it hard to access that state from other parts of the application. With useState() hooks, state can only be passed from a parent component to its children, so you quickly get stuck with the problem of prop drilling if you need to pass it down through multiple levels. That’s where Redux comes in to manage the application state.
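
As a contrived sketch (the component names are made up for illustration), state defined at the top must be threaded through every intermediate component:

import React, { useState } from "react";

// `user` is only needed by Profile, yet Layout must thread it through.
const Profile = ({ user }: { user: string }) => <p>{user}</p>;
const Layout = ({ user }: { user: string }) => <Profile user={user} />;

const App = () => {
  const [user] = useState("Ada");
  return <Layout user={user} />;
};

export default App;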

Introducing Redux Toolkit

Redux Toolkit is a set of opinionated and standardised tools that simplify application development using the Redux state management library.

The primary benefit of using Redux Toolkit is that it removes the overhead of writing a lot of boilerplate, as you’d have to do with plain Redux.

It eliminates the need to write standard Redux setup code, such as defining actions, reducers, and store configuration, which can be a significant amount of code to write and maintain.

Jerry Navi has a great tutorial that shows the full Redux setup process.

Why I Prefer Redux Toolkit Over Redux

The Redux Toolkit has several key features which make me use this library over plain Redux:

  1. Defining reducers
    With Redux Toolkit, you can specify a slice with a few lines of code to define a reducer instead of defining actions and reducers separately, like Redux.
  2. Immutability helpers
    Redux Toolkit includes a set of utility functions that make it easy to update objects and arrays in an immutable way. This makes writing code that follows the Redux principles of immutability simpler.
  3. Built-in middleware
    Redux Toolkit includes built-in middleware that can handle asynchronous request tasks.
  4. DevTools integration
    Redux Toolkit includes integration with the Redux DevTools browser extension, which makes it easier to debug and analyse Redux code.

Using Redux Toolkit To Build A Project Issue Tracker

I think the best way to explain the value and benefits of using Redux Toolkit is simply to show them to you in a real-world context. So, let’s develop an app with it that is designed to create and track GitHub issues.

You can follow along with the code examples as we go and reference the full code anytime by grabbing it from GitHub. There is also a live deployment of this example that you can check out.

Start creating a new React app with the following command:

yarn create react-app project_issue_tracker --template typescript

This generates a folder for our project with the basic files we need for development. The --template typescript part of the command is used to add TypeScript to the stack.

Now, let’s install the dependencies packages required for our project and build the primary UI for the application before we implement Redux Toolkit. First, navigate to the project_issue_tracker project folder we just created:

cd project_issue_tracker

Then run the following command to install Material UI and Emotion, where the former is a design library we can use to style components, and the latter enables writing CSS in JavaScript files.

yarn add @mui/material @emotion/react @emotion/styled

Now we can install Redux Toolkit and Redux itself:

yarn add @reduxjs/toolkit react-redux

We have everything we need to start developing! We can start by building the user interface.

Developing The User Interface

In this section, we will be developing the UI of the app. Open the main project folder and create a new components subfolder directly in the root. Inside this new folder, create a new file called ProjectCard.tsx. This is where we will write the code for a ProjectCard component that contains information about an open issue in the project issue tracker.

Let’s import some design elements from the Material UI package we installed to the new /components/ProjectCard.tsx file to get us started:

import React from "react";
import { Typography, Grid, Stack, Paper} from "@mui/material";
interface IProps {
    issueTitle: string
}
const ProjectCard : React.FC<IProps> = ({ issueTitle }) => {
    return(
        <div className="project_card">
            <Paper elevation={1} sx={{p: '10px', m:'1rem'}}>
                <Grid container spacing={2}>
                    <Grid item xs={12} md={6}>
                        <Stack spacing={2}>
                            <Typography variant="h6" sx={{fontWeight: 'bold'}}>
                                Issue Title: {issueTitle}
                            </Typography>
                            <Stack direction='row' spacing={2}>
                                <Typography variant="body1">
                                    Opened: yesterday
                                </Typography>
                                <Typography variant="body1">
                                    Priority: medium
                                </Typography>
                            </Stack>
                        </Stack>
                    </Grid>
                </Grid>
            </Paper>
        </div>
    )
}
export default ProjectCard;

This creates the project card that displays an issue title, issue priority level, and the time the issue was “opened.” Notice that we are using an issueTitle prop that will be passed to the ProjectCard component to render the issue with a provided title.

Now, let’s create the component for the app’s HomePage to display all the issues. We’ll add a small form to the page for submitting new issues that contain a text field for entering the issue name and a button to submit the form. We can do that by opening up the src/HomePage.tsx file in the project folder and importing React’s useState hook, a few more styled elements from Material UI, and the ProjectCard component we set up earlier:

import React, { useState } from "react";
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
const HomePage = () => {
    const [textInput, setTextInput] = useState('');
    const handleTextInputChange = (e:any) => {
        setTextInput(e.target.value);
    };
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained">Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                        <ProjectCard issueTitle="Bug: Issue 1" />
                        <ProjectCard issueTitle="Bug: Issue 2" />
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

This results in a new HomePage component that a user can interact with to add new issues by entering an issue name in a form text input. When the issue is submitted, a new ProjectCard component is added to the HomePage, which acts as an index for viewing all open issues.

The only thing left for the interface is to render the HomePage, which we can do by adding it to the App.tsx file. The full code is available here on GitHub.

Using Redux Toolkit

Now that our UI is finalised, we can move on to implementing Redux Toolkit to manage the state of this app. We will use Redux Toolkit to manage the state of the ProjectCard list by storing all the issues in a store that can be accessed from anywhere in the application.

Before we move to the actual implementation, let’s understand a few Redux Toolkit concepts to help understand what we’re implementing:

  1. createSlice
    This function makes it easy to define the reducer, actions, and the initialState under one object. Unlike plain Redux, you don’t need a switch statement over action types or separately defined action creators. createSlice accepts an object containing the name of the slice, the initial state of the store, and the reducers, where you define all the reducer logic along with its action types.
  2. configureStore
    This function is an abstraction for the Redux createStore() function. It removes the dependency of defining reducers separately and creating a store again. This way, the store is configured automatically and can be passed to the Provider.
  3. createAsyncThunk
    This function simplifies making asynchronous calls. It automatically dispatches many different actions for managing the state of the calls and provides a standardised way to handle errors. (A small sketch follows this list.)
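
We won’t need createAsyncThunk for this example, but a minimal sketch of it might look like this (the fetchIssues thunk and the /api/issues endpoint are hypothetical):

import { createAsyncThunk } from "@reduxjs/toolkit";

// A hypothetical thunk: Redux Toolkit auto-generates and dispatches
// pending/fulfilled/rejected actions around the async call.
export const fetchIssues = createAsyncThunk("issue/fetchIssues", async () => {
  const response = await fetch("/api/issues");
  return (await response.json()) as string[];
});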

Let’s implement all of this! We will create the issueReducer with an addIssue() action that adds any new submitted issue to the projectIssues store. This can be done by creating a new file in src/redux/ called IssueReducer.ts with this code:

// Part 1
import { createSlice, PayloadAction } from "@reduxjs/toolkit"

// Part 2
export interface IssueInitialState {
    projectIssues: string[]
}
const initialState: IssueInitialState = {
    projectIssues: []
}

// Part 3
export const issueSlice = createSlice({
    name: 'issue',
    initialState,
    reducers: {
        addIssue: (state, action: PayloadAction<string>) => {
            state.projectIssues = [...state.projectIssues, action.payload]
        }
    }
})

// Part 4
export const { addIssue } = issueSlice.actions
export default issueSlice.reducer

Let’s understand each part of the code. First, we are importing the necessary functions from the Redux @reduxjs/toolkit package.

Then, we create the type definition of our initial state and initialise the initialState for the issueReducer. The initialState has a projectIssues[] list that will be used to store all the submitted issues. We can have as many properties defined in the initialState as we need for the application.

Thirdly, we are defining the issueSlice using Redux Toolkit’s createSlice function, which has the logic of the issueReducer as well as the different actions associated with it. createSlice accepts an object with a few properties, including:

  • name: the name of the slice,
  • initialState: the initial state of the reducer function,
  • reducers: an object that accepts different actions we want to define for our reducer.

The slice name for the issueReducer is issueSlice. Its initialState is defined, and a single addIssue action is associated with it. The addIssue action is dispatched whenever a new issue is submitted. We could define other actions, too, if the app required them, but this is all we need for this example.

Finally, in the last part of the code, we export the actions associated with our reducer and the issueSlice reducer. We have fully implemented our issueReducer, which stores all the submitted issues by dispatching the addIssue action.

Now let’s configure the issueReducer in our store so we can use it in the app. Create a new file in src/redux/ called index.ts, and add the following code:

import { configureStore } from "@reduxjs/toolkit";
import IssueReducer from "./IssueReducer";
export const store = configureStore({
    reducer: {
        issue: IssueReducer
    }
})
export type RootState = ReturnType<typeof store.getState>
export type AppDispatch = typeof store.dispatch

This code configures and creates the store using the configureStore() function that accepts a reducer where we can pass all of the different reducers.

We are done adding the reducer and configuring the store with Redux Toolkit. Let’s do the final step of passing the store to our app. Start by updating the App.tsx file to pass the store using the Provider:

import React from 'react';
import { Provider } from "react-redux"
import { store } from './redux';
import HomePage from './HomePage';
function App() {
    return (
        <div className="App">
            <Provider store={store}>
                <HomePage />
            </Provider>
        </div>
     );
}
export default App;

Here, you can see that we are importing the store and passing it directly to the Provider. We don’t need to write anything extra to create a store or configure DevTools like we would with plain Redux. This is definitely one of the ways Redux Toolkit streamlines things.

OK, we have successfully set up a store and a reducer for our app with Redux Toolkit. Let’s use our app now and see if it works. To quickly sum things up, the dispatch() function is used to dispatch any actions to the store, and useSelector() is used for accessing any state properties.

We will dispatch the addIssue action when the form button is clicked:

const handleClick = () => {
    dispatch(addIssue(textInput))
}

To access the projectIssue list stored in our reducer store, we can make use of useSelector() like this:

const issueList = useSelector((state: RootState) => state.issue.projectIssues)

Finally, we can render all the issues by map()-ping the issueList to the ProjectCard component:

{
    issueList.map((issue) => {
        return(
            <ProjectCard key={issue} issueTitle={issue} />
        )
    })
}

The final code for HomePage.tsx looks like this:

import React, { useState } from "react";
import { useDispatch, useSelector } from "react-redux";
import { RootState } from "./redux/index"
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
import { addIssue } from "./redux/IssueReducer";
const HomePage = () => {
    const dispatch = useDispatch();
    const issueList = useSelector((state: RootState) => state.issue.projectIssues)
    const [textInput, setTextInput] = useState('');
    const handleTextInputChange = (e:any) => {
        setTextInput(e.target.value);
    };
    const handleClick = () => {
        dispatch(addIssue(textInput))
    }
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained" onClick={handleClick}>Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                    {
                        issueList.map((issue) => {
                            return(
                                <ProjectCard key={issue} issueTitle={issue} />
                            )
                        })
                    }
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

Now, when we add and submit an issue using the form, that issue will be rendered on the homepage.

This section covered how to define a reducer and how it’s used in the app. The following section will cover how Redux Toolkit makes asynchronous calls a relatively simple task.

Making Asynchronous Calls With Redux Toolkit

We implemented our store to save and render any newly added issue in our app. What if we want to call the GitHub API for a repository and list all of its issues in our app? In this section, we will see how to use the createAsyncThunk() API with the slice to fetch data and render all the repository issues using an API call.

I always prefer to use the createAsyncThunk() API of Redux Toolkit because it standardises the way the different states of a request are handled, such as loading, error, and fulfilled. Another reason is that we don’t need to add extra configuration for the middleware.

Let’s add the code for creating a GithubIssue reducer first before we break it down to understand what’s happening. Add a new GithubIssueReducer.ts file in the /redux folder and add this code:

import { createAsyncThunk, createSlice } from '@reduxjs/toolkit';
export const fetchIssues = createAsyncThunk<string[], void, { rejectValue: string }>(
  "githubIssue/fetchIssues",
  async (_, thunkAPI) => {
    try {
      const response = await fetch("https://api.github.com/repos/github/hub/issues");
      const data = await response.json();
      const issues = data.map((issue: { title: string }) => issue.title);
      return issues;
    } catch (error) {
      return thunkAPI.rejectWithValue("Failed to fetch issues.");
    }
  }
);
interface IssuesState {
  issues: string[];
  loading: boolean;
  error: string | null;
}
const initialState: IssuesState = {
  issues: [],
  loading: false,
  error: null,
};
export const issuesSliceGithub = createSlice({
  name: 'github_issues',
  initialState,
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(fetchIssues.pending, (state) => {
        state.loading = true;
        state.error = null;
      })
      .addCase(fetchIssues.fulfilled, (state, action) => {
        state.loading = false;
        state.issues = action.payload;
      })
      .addCase(fetchIssues.rejected, (state, action) => {
        state.loading = false;
        // rejectWithValue() puts the rejection message on action.payload
        state.error = action.payload ?? 'Something went wrong';
      });
  },
});
export default issuesSliceGithub.reducer;

Let’s understand the fetchIssues part first:

  1. We are using the createAsyncThunk() API provided by the Redux Toolkit. It helps create asynchronous actions and handles the app’s loading and error states.
  2. The action type name is the first argument passed to createAsyncThunk(). The specific action type name we have defined is githubIssue/fetchIssues.
  3. The second argument is a function that returns a Promise, which resolves to the value the fulfilled action will carry as its payload. This is where the asynchronous function fetches data from a GitHub API endpoint and maps the response data to a list of issue titles.
  4. The third argument is an object that contains configuration options for the async thunk. In this case, we have specified that the async thunk will not be dispatched with any arguments (hence the void type) and that if the Promise returned by the async function is rejected, the async thunk will return an action with a rejected status along with a rejectValue property that contains the string “Failed to fetch issues.”

When this action is dispatched, the API call is made, and the fetched issue titles are stored in the slice’s state. We can follow this exact same sequence of steps to make any API calls we need.

The second section of the code is similar to what we used when we created the issueSlice, but with three differences:

  1. extraReducers
    This object contains the reducer logic for action types that are not defined in the createSlice reducers object. It takes a builder object where different cases can be added using addCase for specific action types.
  2. addCase
    This method on the builder object creates a new case for the reducer function.
  3. API call states
    The callback functions passed to the addCase method handle the actions dispatched by createAsyncThunk(), updating the store based on the API call states (pending, fulfilled, and rejected).

We can now use the GithubIssue reducer actions and the store in our app. Let’s add the GithubIssueReducer to our store first. Update the /redux/index.ts file with this code:


import { configureStore } from "@reduxjs/toolkit";
import { useDispatch } from "react-redux";
import IssueReducer from "./IssueReducer";
import GithubIssueReducer from "./GithubIssueReducer";
export const store = configureStore({
    reducer: {
        issue: IssueReducer,
        githubIssue: GithubIssueReducer
    }
})
export type RootState = ReturnType<typeof store.getState>
export type AppDispatch = typeof store.dispatch
export const useAppDispatch = () => useDispatch<AppDispatch>()

We just added the GithubIssueReducer to our store with the name mapped to githubIssue. We can now use this reducer in our HomePage component to dispatch fetchIssues() and populate our page with all the issues received from the GitHub repo.

import React, { useState, useEffect } from "react";
import { useSelector } from "react-redux";
import { useAppDispatch, RootState, AppDispatch } from "./redux/index";
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
import { addIssue } from "./redux/IssueReducer";
import { fetchIssues } from "./redux/GithubIssueReducer";
const HomePage = () => {
    const dispatch: AppDispatch = useAppDispatch();
    const [textInput, setTextInput] = useState('');
    const githubIssueList = useSelector((state: RootState) => state.githubIssue.issues)
    const loading = useSelector((state: RootState) => state.githubIssue.loading);
    const error = useSelector((state: RootState) => state.githubIssue.error);
    useEffect(() => {
        dispatch(fetchIssues())
      }, [dispatch]);

    if (loading) {
      return <div>Loading...</div>;
    }

    if (error) {
      return <div>Error: {error}</div>;
    }
    const handleTextInputChange = (e:any) => {
        setTextInput(e.target.value);
    };
    const handleClick = () => {
        console.log(textInput)
        dispatch(addIssue(textInput))
    }
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained" onClick={handleClick}>Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                    {
                        githubIssueList?.map((issue: string) => {
                            return(
                                <ProjectCard key={issue} issueTitle={issue} />
                            )
                        })
                    }
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

This updates the code in HomePage.tsx with two minor changes:

  1. We dispatch fetchIssues, the action created by createAsyncThunk(), to make the API call inside the useEffect hook.
  2. We use the loading and error states when the component renders.

Now, when loading the app, you will first see the “Loading” text rendered, and once the API call is fulfilled, the githubIssueList will be populated with all the titles of GitHub issues fetched from the repo.

Once again, the complete code for this project can be found on GitHub. You can also check out a live deployment of the app, which displays all the issues fetched from GitHub.

Conclusion

There we have it! We used Redux Toolkit in a React TypeScript application to build a fully functional project issue tracker that syncs with GitHub and allows us to create new issues directly from the app.

We learned many of the foundational concepts of Redux Toolkit, such as defining reducers, immutability helpers, built-in middleware, and DevTools integration. I hope you feel empowered to use Redux Toolkit effectively in your projects. With Redux Toolkit, you can improve the performance and scalability of your React applications by effectively managing the global state.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Deepak Kumar)
<![CDATA[How To Use AI Tools To Skyrocket Your Programming Productivity]]> https://smashingmagazine.com/2023/05/ai-tools-skyrocket-programming-productivity/ https://smashingmagazine.com/2023/05/ai-tools-skyrocket-programming-productivity/ Tue, 09 May 2023 09:00:00 GMT Programming is fun. At least, that’s the relationship I would love to have with programming. However, we all know that with the thrills and joys of programming, there comes a multitude of struggles, unforeseen problems, and long hours of coding. Not to mention — too much coffee.

If only there were a way to cut out all of the menial struggles programmers face daily and bring them straight to the things they should be spending their energy on thinking and doing, such as critical problem-solving, creating better designs, and testing their creations.

Well, in recent times, we’ve been introduced to exactly that.

The start of this year marked the dawn of a huge shift towards Artificial Intelligence (AI) as a means of completing tasks, saving time, and improving our systems. There is a whole new realm of use cases with the rise of AI and its potential to seriously impact our lives in a positive manner.

While many have concerns swirling about AI taking over jobs (and yes, programming jobs have been raised as an example), I take an entirely different perspective. I believe that AI has the ability to skyrocket your productivity in programming like nothing before, and over the last couple of months, I have been able to reap the benefits of this growing wave.

Today, I want to share this knowledge with you and the ways that I have been using AI to supersize my programming output and productivity so that you’ll be able to do the same.

Case Study: ChatGPT

If you have missed much of the recent news, let me get you up to speed and share with you the main inspiration for this guide. In late November 2022, OpenAI announced its latest chatbot — ChatGPT, which took the world by storm with over a million sign-ups in its first week.

It was an extremely powerful tool that had never been seen before, blowing people away with its capabilities and responses. Want a 30-word summary of a 1000-word article? Throw it in, and in a few seconds, you’ve just saved yourself a long read. Need an email sales copy for a programming book that teaches you how to code in O(1) speed, written in the style of Kent Beck? Again, it will be back to you in a few seconds. The list of ChatGPT use cases goes on.

However, as a programmer, what really got me excited was ChatGPT’s ability to understand and write code. GPT-3, the model that ChatGPT runs on, has been trained on a wide range of text, including programming languages and code excerpts. As a result, it can generate code snippets and explanations within a matter of seconds.

While there are many AI tools other than ChatGPT that can help programmers boost their productivity, such as YouChat and Cogram, I will be looking at ChatGPT as the main tool for this guide because it is publicly available for free on OpenAI’s website and has a very gentle learning curve for a wide range of applications.

And again, before we continue, I would like to re-emphasize that

AI tools such as ChatGPT are meant to streamline your workflow, not take over and replace your thinking and problem-solving.

That being said, let’s see how I used ChatGPT to skyrocket my programming productivity.

Common Problems And How ChatGPT Can Help

To help shine a light on this topic, I have compiled five of the most common ways that I have used ChatGPT to overcome problems that any programmer would experience daily. Those five problems are the following:

  1. Programmer’s block,
  2. Long hours of debugging,
  3. Understanding long documentation,
  4. Developer testing,
  5. Code optimization.

It’s a lot to cover, but I’ll give you real-life examples so that you will be able to take the knowledge from this guide and use it to your own advantage. Let’s jump straight in.

Programmer’s Block

Programmer’s block is the programmer’s equivalent of writer’s block and is one of the most common problems that many programmers, myself included, face regularly. When tasked with a significant programming job, such as getting started with building a multiple-page website in HTML, CSS, and JavaScript, it’s easy to get caught in the weeds of not knowing where to start.

Or you could be knee-deep in a coding project and hit a roadblock in finding a solution for a problem. It’s often a frustrating scenario to be in.

ChatGPT has been an excellent solution for that. Let’s look at the example above in having programmer’s block before embarking on a large programming task. Suppose I’m looking to start a new project. After surfing for software company ideas, I decided to develop a sleek and modern online store in HTML, CSS, and JavaScript to sell my latest programming book.

While ChatGPT won’t be able to hit the mark in producing the entire project, a great way to use it is to generate the skeleton for you to begin. Throwing in a detailed prompt for the task, this is what you get:

Prompt: Can you provide a basic structure for a sleek and modern single online store landing page, with red as the main color, for my new programming book in HTML and CSS?

index.html

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Your Book Title</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <header>
    <nav>
      <ul>
        <li><a href="#">Home</a></li>
        <li><a href="#">About</a></li>
        <li><a href="#">Contact</a></li>
      </ul>
    </nav>
  </header>
  <main>
    <section>
      <h1>Programming book title</h1>
      <p>A comprehensive guide to becoming a proficient programmer</p>
      <button>Learn More</button>
    </section>
    <section>
      <h2>Features of the Book:</h2>
      <ul>
        <li>Step-by-step tutorials for learning core programming concepts</li>
        <li>In-depth explanations of commonly used programming languages</li>
        <li>Hands-on exercises and real-world projects to reinforce learning</li>
        <li>Case studies to illustrate the application of programming in various industries</li>
        <li>Tips and tricks for efficient and effective programming practices</li>
      </ul>
    </section>
  </main>
  <footer>
    <p>Copyright 2023</p>
  </footer>
</body>
</html>

style.css

body {
  font-family: Arial, sans-serif;
  display: flex;
  flex-direction: column;
  align-items: center;
  margin: 0;
  padding: 0;
}

header, footer {
  background-color: red;
  color: white;
  display: flex;
  justify-content: center;
  align-items: center;
  width: 100%;
  height: 50px;
  box-shadow: 0px 4px 10px rgba(0, 0, 0, 0.25);
}

nav ul {
  display: flex;
  list-style: none;
  margin: 0;
  padding: 0;
}

nav a {
  color: white;
  display: block;
  padding: 10px 20px;
  text-decoration: none;
}

main {
  display: flex;
  flex-direction: column;
  align-items: center;
  width: 100%;
  height: calc(100vh - 100px);
}

section {
  max-width: 800px;
  padding: 40px;
  margin-top: 100px;
  text-align: center;
  background-color: white;
  box-shadow: 0px 4px 10px rgba(0, 0, 0, 0.25);
  border-radius: 10px;
  margin-bottom: 20px;
  margin-left: auto;
  margin-right: auto;
}

h2 {
  margin-top: 40px;
}

ul {
  list-style: none;
  margin-top: 20px;
  padding-left: 0;
}

li {
  margin-bottom: 10px;
}

That gives us the following template to start our work with:

Nothing impressive, but it does provide a great starting point as opposed to a blank HTML file.

Now, of course, I would highly recommend that the most you create with this is the barebones structure. We are using these tools to abstract away menial tasks like creating the HTML structure so that you can focus on more important things like implementing your own CSS styles and HTML features.

HTML and CSS templates are already widely used concepts. However, with AI, we can now create more personalized templates and basic code structures, getting us from staring at a blank HTML file to a workable skeleton in a matter of minutes.

Use it to create a starting platform for you to get over your programmer’s block, but for the fine details and exact features, your programming knowledge will still be irreplaceable.

Nevertheless, I have been using it to get numerous programming projects up and running. I had this sentence-length counter made from scratch easily within an hour by creating the base template and adding on what I wanted after. I find that being able to jump-start that process makes my programming workflow much more streamlined and enjoyable.

Long Hours Of Debugging

Another common frustration every programmer knows is debugging. Debugging is an extremely time-intensive aspect of programming that can often be very draining and leave programmers at roadblocks, which is detrimental to a productive programming session.

Fortunately, AI can cut out a lot of the frustration of debugging, though it does not replace a programmer’s need for strong debugging fundamentals. At the current time, most AI tools are not able to spot every single flaw in your code and suggest the correct changes to make; hence, it is still essential that you are capable of debugging code.

However, AI is a great supplementary tool to your debugging skills in two main ways:

  1. Understanding runtime errors;
  2. Providing context-aware suggestions.

Understanding Runtime Errors

When faced with errors that you have never seen before in your code, a common reaction would be to hit Google and spend the next chunk of your time surfing through forums and guides to try and find a specific answer for something like the following:

Uncaught TypeError: Cannot read property 'value' of undefined.

Rather than spending your time frantically searching the web, a simple prompt can provide everything you would need for the most part.
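
To make this concrete, here is a minimal, hypothetical sketch of code that produces this exact error, together with the kind of fix such a prompt usually surfaces (the formFields object and its email property are made up for this example):

// A hypothetical snippet that throws this exact error:
// formFields.email is undefined, so reading .value from it fails.
const formFields = {};
console.log(formFields.email.value);
// Uncaught TypeError: Cannot read property 'value' of undefined

// The kind of fix typically suggested: check that the property exists
// (or use optional chaining) before reading .value.
console.log(formFields.email?.value); // undefined, but no crash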

Providing Context-Aware Suggestions

The other way in which I’ve been able to get help from ChatGPT for debugging is through its context-aware suggestions for errors. Traditionally, even though we may find the answers for what our program’s bugs are online, it is oftentimes difficult to put the errors and solutions into context.

Here is how ChatGPT handles both of these scenarios with a simple prompt.

Prompt: I found the error “Uncaught TypeError: Cannot read property 'value' of undefined.” in my JavaScript code. How do I resolve it?

With this, I have been able to cut out a lot of time that I would have been spending surfing for answers and turn that time into producing error-free code. While you still have to have good knowledge in knowing how to implement these fixes, using AI as a supplementary tool in your debugging arsenal can provide a huge boost in your programming productivity.

Understanding Long Documentation

Another fantastic way to use AI tools to your advantage is by streamlining long documentation into digestible information whenever you have to use APIs or libraries. As a Natural Language Processing model, this is where ChatGPT excels.

Imagine you’re working on a new web development project, but want to use Flask for the first time. Traditionally, you might spend hours scrolling through pages of dense documentation from Flask, trying to find the precise information you need.

With a tool like ChatGPT, you’ll be able to streamline this problem and save an immense amount of time in the following ways:

  • Generate concise summaries
    You’ll be able to automatically summarize long code documentation, making it easier to quickly understand the key points without having to read through the entire document.
  • Answer specific questions
    ChatGPT can answer specific questions about the code documentation, allowing you to quickly find the information you need without having to search through the entire document.
  • Explain technical terms
    If you are having trouble understanding some terms in the documentation, rather than navigating back to extensive forum threads, ChatGPT can explain technical terms in simple language, making it easier for non-technical team members to understand the code documentation.
  • Provide examples
    Similarly to debugging, you can get relatable examples for each code concept in the documentation, making it easier for you to understand how the code works and how you can apply it to your own projects.
  • Generate code snippets
    ChatGPT can generate code snippets based on the code documentation, allowing you to experiment with use cases and tailor the examples to your specific needs.

It’s like having a search engine that can understand the context of your query and provide the most relevant information. You’ll no longer be bogged down by pages of text, and you can focus on writing and testing your code. Personally, I have been able to blast through numerous libraries, understand and apply them for my own needs in a fraction of the time I normally would.

Developer Testing

Developer testing is one of the cornerstone skills that a programmer or developer must have in order to create bulletproof programs and applications. However, even for experienced programmers, a common problem in developer testing is that you won’t know what you don’t know.

What that means is that in your created test cases, you might miss certain aspects of your program or application that could go unnoticed until it reaches a larger audience. Oftentimes, to avoid that scenario, we could spend hours on end trying to bulletproof our code to ensure that it covers all its bases.

However, this is a great way that I’ve been able to incorporate AI into my workflow as well.

Having AI suggest tests that cover all edge cases is a great way to provide an objective and well-rounded testing phase for your projects.

It also does so in a fraction of the time you would spend.

For example, suppose you are working on the same product landing page for your programming book from earlier. Now, I’ve created a proper product page that involves a form with the following fields for you to process:

script.js

// Get references to the form elements.
const form = document.getElementById("payment-form");
const cardNumber = document.getElementById("card-number");
const expiryDate = document.getElementById("expiry-date");
const cvv = document.getElementById("cvv");
const submitButton = document.getElementById("submit-button");

// Handle form submission.
form.addEventListener("submit", (event) => {
  event.preventDefault();

  // Disable the submit button to prevent multiple submissions.
  submitButton.disabled = true;

  // Create an object to hold the form data.
  const formData = {
    cardNumber: cardNumber.value,
    expiryDate: expiryDate.value,
    cvv: cvv.value,
  };

  // Perform client-side validation on the form data
  if (!validateFormData(formData)) {
    // If the data is invalid, re-enable the submit button and return
    submitButton.disabled = false;
    return;
  }

  // Send the form data to the server.
  // The following is just an example and should not be used in a real-world scenario
  // as it lacks security measures and proper payment gateway integration.
  fetch("/charge", {
    method: "POST",
    body: JSON.stringify(formData),
    headers: {
      "Content-Type": "application/json",
    },
  })
    .then((response) => response.json())
    .then((data) => {
      // Handle the server response.
      if (data.success) {
        // Payment was successful.
        alert("Payment successful!");
      } else {
        // Payment failed.
        alert("Payment failed. Please try again.");
        submitButton.disabled = false;
      }
    })
    .catch((error) => {
      console.error(error);
      alert("An error occurred. Please try again.");
      submitButton.disabled = false;
    });
});

// Perform client-side validation on the form data.
function validateFormData(data) {
  // Example validation checks.
  if (!data.cardNumber || data.cardNumber.length !== 16) {
    alert("Please enter a valid card number.");
    return false;
  }
  if (!data.expiryDate || data.expiryDate.length !== 5) {
    alert("Please enter a valid expiry date in the format MM/YY.");
    return false;
  }
  if (!data.cvv || data.cvv.length !== 3) {
    alert("Please enter a valid CVV.");
    return false;
  }
  return true;
}

Knowing that there are many invalid inputs that customers might use, you want to create some test cases to validate your form’s functionalities. Rather than spending copious amounts of time coming up with numerous test cases, let’s let AI identify any edge cases we missed to bulletproof our tests.

Prompt: Can you help to come up with some edge cases to ensure that the form does not accept any invalid inputs in collecting payment details?

test.js

// Test Case 1: Check if form doesn't submit when fields are empty.
document.querySelector("form").addEventListener("submit", function(e) {
  e.preventDefault();
  var name = document.querySelector("input[name='name']").value;
  var address = document.querySelector("input[name='address']").value;
  var cardNumber = document.querySelector("input[name='card-number']").value;
  var expiryDate = document.querySelector("input[name='expiry-date']").value;
  var cvv = document.querySelector("input[name='cvv']").value;

  if (!name || !address || !cardNumber || !expiryDate || !cvv) {
    alert("All fields are required");
    return;
  }

  // Check if all the fields contain valid inputs.
  if (!validateName(name) || !validateAddress(address) || !validateCardNumber(cardNumber) || !validateExpiryDate(expiryDate) || !validateCVV(cvv)) {
    return;
  }

  // Continue with payment processing.
});

// Test Case 2: Check if name input is valid.
function validateName(name) {
  var nameRegEx = /^[a-zA-Z ]+$/;
  if (!nameRegEx.test(name)) {
    alert("Invalid name. Please enter a valid name");
    return false;
  }
  return true;
}

// Test Case 3: Check if address input is valid.
function validateAddress(address) {
  var addressRegEx = /^[a-zA-Z0-9\s,'-]*$/;
  if (!addressRegEx.test(address)) {
    alert("Invalid address. Please enter a valid address");
    return false;
  }
  return true;
}

// Test Case 4: Check if card number input is valid.
function validateCardNumber(cardNumber) {
  var cardNumberRegEx = /^\d{16}$/;
  if (!cardNumberRegEx.test(cardNumber)) {
    alert("Invalid card number. Please enter a valid card number");
    return false;
  }
  return true;
}

Creating all sorts of test cases is usually a daunting and repetitive task, and after a while of working on the same project, it can make programming feel very menial. However, with the assistance of AI, this process can be streamlined and, at the same time, improved to cover more bases in your developer testing.

Code Optimization

As it stands, AI still has a long way to go in helping to optimize code and can’t actually do the optimization itself. However, it can still help to provide some useful insights and give some pointers to improving your programming. Here are the most common ways that I have used ChatGPT in optimizing my code for performance:

  • Code Suggestions
    Most simply, it can suggest code snippets or alternative solutions to improve the performance of your existing code.
  • Best Practices
    Having been trained on a wide range of code patterns, ChatGPT can help you follow best practices for coding and software design, leading to more efficient and optimized code.
  • Refactoring
    It helps to reorganize existing code to improve its efficiency and maintainability without affecting its functionality (see the sketch after this list).
  • Knowledge Sharing
    There are many scenarios where your code can be implemented simply through a single import or with other programming languages, libraries, and frameworks. ChatGPT’s suggestions help ensure you are making informed decisions on the best implementations for your needs.
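
As a hypothetical illustration of the refactoring point above, this is the kind of suggestion an AI assistant often makes: replacing a manual accumulator loop with the equivalent built-in array methods (the users data is made up for this example):

// Made-up sample data for this sketch.
const users = [
  { name: "Ada", isActive: true },
  { name: "Bob", isActive: false },
];

// Before: a manual loop collecting the names of active users.
const activeNames = [];
for (let i = 0; i < users.length; i++) {
  if (users[i].isActive) {
    activeNames.push(users[i].name);
  }
}

// After: the same behavior, shorter and more idiomatic,
// which is the kind of refactor an AI assistant might suggest.
const activeNamesRefactored = users
  .filter((user) => user.isActive)
  .map((user) => user.name);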

Of course, the bulk of these still requires you to optimize your code manually. However, using AI to gain insights and suggestions for this can be a great way to improve your productivity and produce higher-quality code.

AI Is Amazing, But It Does Have Its Limitations

Now that we have seen what AI can do for you and your programming productivity, I would imagine you are bubbling with ideas on how you are going to start implementing these in your programming workflows.

However, it is essential to keep in mind that these models are fairly new and still have a long way to go regarding reliability and accuracy. These are just some of the limitations that AI, specifically, ChatGPT, has:

  • Limited Understanding
    AI algorithms like ChatGPT have a limited understanding of code and may not fully understand the implications and trade-offs of certain programming decisions.
  • Training Data Limitations
    The quality and relevance of AI algorithms’ output depend on the quality and scope of the training data. For example, ChatGPT was only trained on data up to 2021. Any updates in programming languages since then may not be reflected.
  • Bias
    AI algorithms can be biased towards specific patterns or solutions based on the data they were trained on, leading to suboptimal or incorrect code suggestions.
  • Lack of Context
    AI algorithms may struggle to understand the context and the desired outcome of a specific coding task, leading to generic or irrelevant advice. While this can be minimized with specific prompts, it is still difficult to generate solutions to more complicated problems.

Nevertheless, these limitations are a small price for the multitude of benefits that AI tools provide. While I am an advocate of using AI to boost your programming productivity, keeping these limitations in mind is crucial when using AI in your workflows, as it is important to ensure that the information or code you are producing is reliable, especially if you are using it in a professional setting.

With the current limitations, AI should only be used as a means to assist your current skills, not to replace them. Hence, with that in mind, use it tactfully and sparingly to achieve a good balance in boosting your productivity but not detracting from your skills as a programmer.

How Else Will AI Improve Programmers’ Lives?

While I have mainly talked about the technical aspects of programming that AI can help in, there are many other areas where AI can help to make your life as a programmer much easier.

We are just at the tip of the iceberg in this incoming wave of AI. Many new use cases for AI appear every day with the potential to improve programmers’ lives even further. In the future, we are likely to see many new integrations of AI in many of our daily software uses as programmers.

There already exists general writing software, which could be useful for programmers in creating code and API documentation. These have been around for a while and have become widely accepted as a tool that helps, not replaces.

General productivity and notetaking tools that use AI have also been a big hit, especially for programming students who have to plow through large amounts of information every day. All in all, where there is a labor-intensive task that can be resolved, AI will likely be making headway in those areas.

Wrapping Up

To wrap things up, I will end with a reminder from the opening of this guide. I believe that there is massive potential in becoming well-versed with AI, not as a means to replace our work, but as a means to improve it.

With the right knowledge of what you can and, more importantly, cannot do, AI can be an extremely valuable skill to have in your programming arsenal and will undoubtedly save you copious amounts of time if used correctly.

Hence, rather than fearing the incoming wave of new AI technology, I encourage you to embrace it. Take the knowledge you have learned from the guide and tailor it to your own needs and uses. Every programmer’s workflows are different, but with the right principles and a good knowledge of the limitations of AI, the benefits are equally available to everyone.

So all that’s left for you to do is to reap the supersized benefits that come with integrating AI into your current workflows and see your programming productivity skyrocket as it has for me. And if there’s one thing to remember, it’s to use AI as your assistant, not your replacement.

]]>
hello@smashingmagazine.com (Shane Duggan)
<![CDATA[The Safest Way To Hide Your API Keys When Using React]]> https://smashingmagazine.com/2023/05/safest-way-hide-api-keys-react/ https://smashingmagazine.com/2023/05/safest-way-hide-api-keys-react/ Mon, 08 May 2023 13:00:00 GMT Back in the day, developers had to write all sorts of custom code to get different applications to communicate with each other. But, these days, Application Programming Interfaces (APIs) make it so much easier. APIs provide you with everything you need to interact with different applications smoothly and efficiently, most commonly where one application requests data from the other application.

While APIs offer numerous benefits, they also present a significant risk to your application security. That is why it is essential to learn about their vulnerabilities and how to protect them. In this article, we’ll delve into the wonderful world of API keys, discuss why you should protect your API keys, and look at the best ways to do so when using React.

What Are API Keys?

If you recently signed up for an API, you will get an API key. Think of API keys as secret passwords that prove to the provider that it is you or your app that’s attempting to access the API. While some APIs are free, others charge for access, and because most API keys never expire, you have every reason to be concerned about the safety of your keys.

Why Do API Keys Need To Be Protected?

Protecting your API keys is crucial for guaranteeing the security and integrity of your application. Here are some reasons why you ought to guard your API keys:

  • To prevent unauthorized API requests.
    If someone obtains your API key, they can use it to make unauthorized requests, which could have serious ramifications, especially if your API contains sensitive data.
  • Financial insecurity.
    Some APIs come with a financial cost. If someone gains access to your API key and racks up requests beyond your budget, you may be stuck with a hefty bill that jeopardizes your financial stability.
  • Data theft, manipulation, or deletion.
    If a malicious person obtains access to your API key, they may steal, manipulate, delete, or use your data for their purposes.

Best Practices For Hiding API Keys In A React Application

Now that you understand why API keys must be protected, let’s take a look at some methods for hiding API keys and how to integrate them into your React application.

Environment Variables

Environment variables (env) are used to store information about the environment in which a program is running. They enable you to hide sensitive data from your application code, such as API keys, tokens, passwords, and any other data you’d like to keep hidden from the public.

One of the most popular env packages you can use in your React application to hide sensitive data is the dotenv package. To get started:

  1. Navigate to your react application directory and run the command below.
    npm install dotenv --save
    
  2. Outside of the src folder in your project root directory, create a new file called .env.

  3. In your .env file, add the API key and its corresponding value in the following format:
    # for CRA applications
    REACT_APP_API_KEY=A1234567890B0987654321C
    
    # for Vite applications
    VITE_SOME_KEY=12345GATGAT34562CDRSCEEG3T
    
  4. Save the .env file and avoid sharing it publicly or committing it to version control.
  5. You can now use the env object to access your environment variables in your React application.
    // for CRA applications
    'X-RapidAPI-Key':process.env.REACT_APP_API_KEY
    // for Vite  applications
    'X-RapidAPI-Key':import.meta.env.VITE_SOME_KEY
    
  6. Restart your application for the changes to take effect.

However, running your project on your local computer is only the beginning. At some point, you may need to upload your code to GitHub, which could potentially expose your .env file. So what to do then? You can consider using the .gitignore file to hide it.

The .gitignore File

The .gitignore file is a text file that tells Git which files to ignore so that they are never committed or pushed to the repository. To do this, add the .env file to your .gitignore before staging your commits and pushing your code to GitHub.

// .gitignore
# dependencies
/node_modules
/.pnp
.pnp.js

# api keys
.env

Keep in mind that at any time you decide to host your projects using any hosting platforms, like Vercel or Netlify, you are to provide your environment variables in your project settings and, soon after, redeploy your app to view the changes.

Back-end Proxy Server

While environment variables can be an excellent way to protect your API keys, remember that they can still be compromised. Your keys can still be stolen if an attacker inspects your bundled code in the browser. So, what then can you do? Use a back-end proxy server.

A back-end proxy server acts as an intermediary between your client application and your server application. Instead of directly accessing the API from the front end, the front end sends a request to the back-end proxy server; the proxy server then retrieves the API key and makes the request to the API. Once the response is received, it removes the API key before returning the response to the front end. This way, your API key will never appear in your front-end code, and no one will be able to steal your API key by inspecting your code. Great! Now let’s take a look at how we can go about this:

  1. Install necessary packages.
    To get started, you need to install some packages such as Express, CORS, Axios, and Nodemon. To do this, navigate to the directory containing your React project and execute the following command:
    npm install express cors axios nodemon
    
  2. Create a back-end server file.
    In your project root directory, outside your src folder, create a JavaScript file that will contain all of your requests to the API.

  3. Initialize dependencies and set up an endpoint.
    In your backend server file, initialize the installed dependencies and set up an endpoint that will make a GET request to the third-party API and return the response data on the listened port. Here is an example code snippet:
    // defining the server port
    const port = 5000
    
    // initializing installed dependencies
    const express = require('express')
    require('dotenv').config()
    const axios = require('axios')
    const app = express()
    const cors = require('cors')
    app.use(cors())
    
    // listening for port 5000
    app.listen(port, () => console.log(`Server is running on ${port}`))
    
    // API request
    app.get('/', (req, res) => {
      const options = {
        method: 'GET',
        url: 'https://wft-geo-db.p.rapidapi.com/v1/geo/adminDivisions',
        headers: {
          'X-RapidAPI-Key': process.env.REACT_APP_API_KEY,
          'X-RapidAPI-Host': 'wft-geo-db.p.rapidapi.com'
        }
      };
      axios.request(options).then(function (response) {
        res.json(response.data);
      }).catch(function (error) {
        console.error(error);
      });
    })
  4. Add a script in your package.json file that will run the back-end proxy server, e.g., "start:backend": "nodemon server.js" (assuming your back-end server file is named server.js).

  5. Kickstart the back-end server by running the command below and then, in this case, navigate to localhost:5000.
    npm run start:backend
    
  6. Make a request to the backend server (http://localhost:5000/) from the front end instead of directly to the API endpoint. Here’s an illustration:
    import axios from "axios";
    import {useState, useEffect} from "react"
    
    function App() {
    
      const [data, setData] = useState(null)
    
      useEffect(()=>{
        const options = {
          method: 'GET',
          url: "http://localhost:5000",
        }
        axios.request(options)
        .then(function (response) {
            setData(response.data.data)
        })
        .catch(function (error) {
            console.error(error);
        })
      }, [])
    
      console.log(data)
    
      return (
        <main className="App">
          <h1>How to Create a Backend Proxy Server for Your API Keys</h1>
          {data && data.map((result) => (
            <section key={result.id}>
              <h4>Name:{result.name}</h4>
              <p>Population:{result.population}</p>
              <p>Region:{result.region}</p>
              <p>Latitude:{result.latitude}</p>
              <p>Longitude:{result.longitude}</p>
            </section>
          ))}
        </main>
      )
    }
    
    export default App;

Okay, there you have it! By following these steps, you'll be able to hide your API keys using a back-end proxy server in your React application.

Key Management Service

Even though environment variables and the back-end proxy server allow you to safely hide your API keys online, you are still not completely safe. You may have friends or foes around you who can access your computer and steal your API key. That is why data encryption is essential.

With a key management service provider, you can encrypt, use, and manage your API keys. There are tons of key management services that you can integrate into your React application, but to keep things simple, I will only mention a few:

  • AWS Secrets Manager
    The AWS Secrets Manager is a secret management service provided by Amazon Web Services. It enables you to store and retrieve secrets such as database credentials, API keys, and other sensitive information programmatically via API calls to the AWS Secrets Manager service (see the sketch below this list). There are a ton of resources that can get you started in no time.
  • Google Cloud Secret Manager
    The Google Cloud Secret Manager is a key management service provided and fully managed by the Google Cloud Platform. It is capable of storing, managing, and accessing sensitive data such as API keys, passwords, and certificates. The best part is that it seamlessly integrates with Google’s back-end-as-a-service features, making it an excellent choice for any developer looking for an easy solution.
  • Azure Key Vault
    The Azure Key Vault is a cloud-based service provided by Microsoft Azure that allows you to seamlessly store and manage a variety of secrets, including passwords, API keys, database connection strings, and other sensitive data that you don’t want to expose directly in your application code.

There are more key management services available, and you can choose to go with any of the ones mentioned above. But if you want to go with a service that wasn’t mentioned, that’s perfectly fine as well.
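
To make the AWS option more concrete, here is a minimal server-side sketch using the AWS SDK for JavaScript (v3) and its Secrets Manager client. The region and secret name are hypothetical placeholders, and this code belongs on your back end (for example, the proxy server from earlier), never in your React bundle:

// Server-side only: do not ship this in front-end code.
const {
  SecretsManagerClient,
  GetSecretValueCommand,
} = require("@aws-sdk/client-secrets-manager");

// The region and secret name below are placeholders for this example.
const client = new SecretsManagerClient({ region: "us-east-1" });

async function getApiKey() {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "my-app/rapidapi-key" })
  );
  return response.SecretString; // the stored API key
}

The front end would still call your proxy endpoint as before; the proxy simply calls getApiKey() whenever it needs the key, so the secret never leaves the server.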

Tips For Ensuring Security For Your API Keys

You have everything you need to keep your API keys and data secure. So, if you have existing projects in which you have accidentally exposed your API keys, don’t worry; I've put together some handy tips to help you identify and fix flaws in your React application codebase:

  1. Review your existing codebase and identify any hardcoded API key that needs to be hidden.
  2. Use environment variables with .gitignore to securely store your API keys. This will help to prevent accidental exposure of your keys and enable easier management across different environments.
  3. To add an extra layer of security, consider using a back-end proxy server to protect your API keys, and, for advanced security needs, a key management tool would do the job.

Conclusion

Awesome! You can now protect your API keys in React like a pro and be confident that your application data is safe and secure. Whether you use environment variables, a back-end proxy server, or a key management tool, they will keep your API keys safe from prying eyes.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Jessica Joseph)
<![CDATA[Learning JavaScript With Imagination]]> https://smashingmagazine.com/2023/05/learning-javascript-with-imagination/ https://smashingmagazine.com/2023/05/learning-javascript-with-imagination/ Fri, 05 May 2023 09:00:00 GMT Many years ago, I set my sights on becoming a senior developer. I achieved that goal! I wish I could say it was a matter of hard work and perseverance, but no, that wasn’t enough. JavaScript stood in my way, and it was while grappling with it that I came across one silly sentence in Marijn Habernecker’s classic book, Eloquent JavaScript. On the topic of variables, it says:

“You should imagine bindings as tentacles rather than boxes. They do not contain values; they grasp them — two bindings can refer to the same value.”

— Marijn Haverbeke

An analogy like this falls into the category of childish comparisons meant mostly for raw beginners to understand basic concepts. They are fun and entertaining but not something that will turn you into a senior developer.

But it struck me. Even in a small way, this childish metaphor made me a better developer. It fixed a fundamental misunderstanding: we don’t put values in variables. If variables did behave like buckets or containers, how could this be true?

const count1 = 10;
const count2 = count1;

How is it possible to have the same 10 value in two different buckets? The same thing can’t be in two places at the same time, right?!

But the octopus metaphor solved this dilemma for me. Two tentacles can certainly grab the same value! It’s a visualization that reveals the underlying nature of values! Give me all of the world’s computer science literature on variables at my fingertips, and it would not be as useful to me as this imaginary octopus.
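
A short sketch makes the tentacle idea concrete. With primitives like 10, the sharing is invisible because primitive values are immutable, but with an object you can watch both tentacles holding the very same thing:

// Two "tentacles" grasping the same object value.
const box = { count: 10 };
const sameBox = box;

// A change made through one binding is visible through the other,
// because both bindings grasp the same value rather than holding copies.
sameBox.count = 20;
console.log(box.count); // 20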

Why can’t all of JavaScript be like this?

My Search For More Visual Learning Material

I noticed a frustrating gap between doing the simple loop and function exercises you find in most beginner courses and actually building programs.

The problem? I still wasn’t at the level where I could decipher reams of dry technical text. I wanted more octopuses!

So, I looked everywhere for them. I scanned the internet for visual and abstract learning resources: Google, YouTube, Medium, TikTok, and every book I could possibly find. I discovered most “visual learning” material fits into one of three groups.

First is a gamified learning experience, like the learn-while-playing platforms CodeCombat and ElevatorSaga. While wonderfully creative and full of stories and characters that help you build real programs with real-world patterns, this type of learning is procedural. The focus is on using concepts rather than diving into what they are.

The second group is the graphically represented syntax or technical explanation. Ever seen an array of apple and orange emojis? Or diagrams of the event loop? These can break down scary concepts into an easier-to-digest visual medium. They can be powerful visual aids that transform dense technical jargon. Examples include Lydia Hallie’s brilliant “JavaScript Visualized” series as well as cheat sheets like this one from Ram Maheshwari.

The third group is closer to what I sought: the analogy-driven learning experience. Developers love a good analogy. We use them all the time in blog posts and video tutorials. They help explain very technical concepts. One resource I found, CodeAnalogies, is particularly impressive, with analogies for everything from content distribution networks to MVC frameworks.

But analogy-driven learning has limitations for me. All of the analogies were disconnected! They had no relation to one another. They were great for wrapping my head around an isolated topic but not for seeing the big picture. The thing with JavaScript is that everything is connected. How can a newspaper analogy for objects be extended to describe prototypal inheritance?

Finally, I came to realize the thing I wanted most was something memorable. I wanted to consolidate everything I was learning into a visual format that was easy to recall when I needed it — whether in an interview or while writing code. Unfortunately, most analogies are entirely forgettable. How many dog, cat, and banana arrays can one take?

Building My Own Visual Representations

There was only one solution to this: create my own visual representations for my JavaScript knowledge tree. But first, I needed to figure out how to make something stick in my memory.

I’ve always had an interest in mnemonic memory methods. These are memory “hacks,” such as the “memory palace”. They help visually encode large amounts of information for easier recall. World memory competitors use it to remember the order of multiple decks of cards and random number sequences.

The basic principle is this: You can take any idea and turn it into an image. For example, an array could be an ocean stingray. That’s good, but still not enough. The trick is to make the mental image as weird, funny, and ridiculous as possible. Images that are out of the ordinary are what stick to memory.

My First Big Lesson

So, here is one of my very first mnemonic representations of JavaScript arrays:

I was so proud of this. We have a stingray street vendor selling fruit, reminding me that arrays hold data. He has a special square device for picking up individual items, which represents the square bracket syntax for selecting items. He has a monocle to remind me that arrays have methods for searching. He has a cowboy lasso that refers to loops, and so on.

It’s a fun depiction. But I was trying to learn JavaScript to land a job! If this ridiculous monocled stingray didn’t make me a better developer, it defeated the purpose. The ultimate test: would I use this image of an array street vendor as a point of reference while coding? Nope. It turned out to be entirely, utterly useless.

I didn’t need a way to remember the term array. Knowing they have methods to search does not tell me how I can perform searches. Even a Swiss Army knife tail with all the main array methods like .sort(), .push(), and .unshift() proved pointless with a two-second Google search.

This was trickier than I thought. I learned my first big lesson:

We can’t learn a programming language using pure mnemonic methods because memorizing the lists of things does not help you understand the underlying concepts.

My Second Big Lesson

After much reflection and many, many more failed attempts, I tried to represent something I had always struggled with: functions. What would the makeup of a function look like? I came up with this pretty poor representation:

At the top, we have silly paratroopers that represent parameters. We send parameters through the entrance, ( ), and they end up in a contained pool (i.e., the function body). They start arguing, and that’s how we can remember arguments.

Unfortunately, this went into the failure bucket, too. Representing syntax isn’t helpful. It’s better to gain familiarity through lots of practice writing it. There were also dangerous flaws in the analogy. It suggested parameters and arguments are the same; the only difference is their place.

An abstraction like this with flawed theory baked into it would actually make me a worse developer! I needed to dig deeper into what things really are.
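
For the record, the distinction is easy to show in plain code: parameters are the named placeholders in a function definition, and arguments are the concrete values supplied at call time.

```js
// `name` and `punctuation` are parameters — placeholders in the definition.
function greet(name, punctuation) {
  return `Hello, ${name}${punctuation}`;
}

// "Ada" and "!" are arguments — the actual values passed when calling.
greet("Ada", "!"); // "Hello, Ada!"
```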

The Breakthrough

The solution was to go atomic. Focusing on the smallest concept would give me the first layer I could use to paint a picture of JavaScript. I circled back to our friendly octopus, where this all began.

What were the tentacles grasping? Values!

Values are often glossed over and don’t seem like the key to unlocking the mysteries of JavaScript. But this simple idea was a breakthrough for me: if code flows, we can imagine that flow as an ocean or a river. What do we find in this flow? Islands!

Values are the islands, and each island has a set location and size. That was it! It was exactly what I was looking for. It still didn’t improve my skills as a developer. But I knew it had potential.
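
Plain code backs the metaphor up: every expression JavaScript evaluates resolves to a value — an “island” with a definite identity.

```js
2 + 2;         // the value 4
"is" + "land"; // the value "island"
Boolean(0);    // the value false
let x = 4 > 2; // the expression 4 > 2 produces the value true, assigned to x
```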

From Analogies To Models

Layer upon layer, I began to build up a vision of JavaScript by focusing on what things are and how they are related to other things. First came islands. Then came genies, evil sorcerers, flying ships, and turtle pilots. My excitement grew as the smaller layers, the fundamentals, could be combined to produce a big-picture understanding of more complex topics like closures.

Each image was not a simple analogy. It was a mental model — a way of thinking. It provided a lens to interpret each concept but required imagination. I call them imagimodels.

I knew from my past attempts that focusing on lists is ineffective. The secret lies in what things are rather than what they do. When I think about it, that was my problem with learning JavaScript all along. I had no issue eventually getting something to work. The problem was I mostly didn’t have the faintest idea why.

Growing A Memorable, Multi-Layered Universe

With this approach, an abstracted world of JavaScript was erected:

This image helps me identify fundamental JavaScript concepts for assignment, local scope, and primitive values. Of course, it is not a standalone image, and you won’t be able to identify these concepts without context. It requires a story to weave JavaScript and analogy together.

A narrative is what drove the other concepts into the picture, quite literally. Here, we can see a closure:
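
In plain code, the same concept — an inner function keeping access to the variables of the scope it was created in — can be sketched like this:

```js
function makeCounter() {
  let count = 0; // local variable captured by the closure
  return function () {
    count += 1; // still reachable after makeCounter() has returned
    return count;
  };
}

const counter = makeCounter();
counter(); // 1
counter(); // 2 — `count` lives on between calls
```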

The next image uses mnemonic methods for remembering useful terminology. Unlike arrays and parameters, “execution context” felt like something I would read in an IBM manual from the 1970s. It was a scary term that deserved a scary representation.

I found a connection between “execution context” and the phrase “executor got hex,” which inspired an illustration of a medieval-style executioner holding an axe in one hand and a hex, or sorcerer’s spell, in the other.

Why the spell? It’s not random. It builds on existing layers representing other sub-concepts. The spell is tied to our understanding of a function invocation and makes you think of wizards and sorcerers, right? And it is the parentheses in every function call.
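
In code terms, each pair of parentheses casts the “spell”: every invocation creates a fresh execution context with its own variables.

```js
function castSpell(target) {
  // Each call creates a new execution context
  // with its own `target` and `message` bindings.
  const message = `Hex cast on ${target}`;
  return message;
}

castSpell("the dragon"); // one invocation, one execution context
castSpell("the knight"); // another invocation, a brand-new context
```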

This raises the question: what is a function? Is it the result of the hex or the execution? Is it the invocation itself? Only a firm understanding of objects would help me identify what is really going on here.

As you can see, the visual layers build upon one another, like a tree with a central trunk branching out in different directions. It’s the knowledge tree I wanted — not one containing every detail about JavaScript, but a central, unified mnemonic resource that could be compared and evaluated against, added to, argued over, and above all, retrieved when I needed it.

I call my illustrations of JavaScript concepts “The Great Sync”, and I use them to continue to grow my understanding. It is also something I now use to teach others.

A Celebration Of Alternative Learning Approaches

Was it all sunshine and daisies for my JavaScript learning journey from this point on? Did building this world make me ace every JavaScript quiz I needed to pass to get that senior-level job?

Nope! I wish the best of luck to anyone who thinks they can learn JavaScript simply by looking at a few pictures.

My biggest takeaway from all my learning efforts is this: even though The Great Sync solved so many of my personal struggles with the language, is it any better than any single one of the resources I mentioned? Does it have any use without actual coding — the tireless, painful process of getting your code to work? Certainly not.

It is one tool, among many, to “see” JavaScript differently and escape the confinement of a code editor or YouTube tutorial.

All of these approaches celebrate the variety and infinite creative potential of the learning experience. And the more we have, the more learners who are stuck on JavaScript can discover new routes to understanding.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Kylo Robinson)
<![CDATA[The Growing Need For Effective Password Management]]> https://smashingmagazine.com/2023/05/effective-password-management/ https://smashingmagazine.com/2023/05/effective-password-management/ Thu, 04 May 2023 12:00:00 GMT This article is sponsored by Passwork

As businesses rely more on digital services and platforms, the number of passwords and access credentials employees need to remember has grown exponentially. This can lead to the use of weak or duplicated passwords, posing a significant security risk. A centralized and secure password management system is essential for mitigating these risks and ensuring that sensitive information remains protected.

Self-Hosted vs. Cloud-Based Password Management Solutions

When it comes to password management solutions, businesses have two primary options: self-hosted and cloud-based. While both have their merits, self-hosted solutions often provide a higher level of control and customization.

Advantages Of Self-Hosted Solutions

  • Greater control
    A self-hosted solution allows administrators to have complete control over the password management infrastructure, enabling them to customize it according to their company’s needs;
  • Enhanced security
    By hosting the password management system on the company’s own servers, businesses can ensure that their sensitive data remains within their control, reducing the risks associated with third-party providers;
  • Compliance
    Self-hosted solutions make it easier for companies to meet industry-specific compliance requirements and data protection regulations.

Limitations Of Cloud-Based Solutions

  • Dependency on third-party providers
    With cloud-based solutions, businesses rely on external providers for the security and availability of their data. This can lead to potential vulnerabilities and the risk of data breaches;
  • Limited customization
    Cloud-based solutions often have predefined features and settings, which may not align with a company’s unique requirements.

Collaborative Password Management In Companies

In a company setting, employees often need to share passwords and access credentials for various applications and services. A collaborative password management system enables the secure sharing of these credentials, improving productivity and security.

Secure Sharing

Collaborative password management systems, like Passwork, provide secure sharing options, allowing employees to share access credentials with colleagues without exposing sensitive data to unauthorized users. This is the kind of feature a company needs for frictionless sharing in a collaborative environment: instead of passing credentials over an exposed channel like email, sharing happens securely through the password manager itself.

Permission Management

To maintain control over who can access and modify shared passwords, a collaborative password management system should offer granular permission management. Administrators can assign different levels of access to individual users or groups, ensuring that employees have access to the information they need without compromising security.

Another benefit of permission management is that it gives you an easy way to see who has access to certain information, as well as a straightforward way to assign and revoke permissions at the individual and group level.

Version Control

Have you ever created a new password for a service, then needed to reference the previous one? There’s nothing worse than losing a password when you need it in a pinch, and in an environment where multiple users can update and modify shared passwords, version control becomes essential. Collaborative password management systems should provide a history of changes made to shared credentials, enabling administrators to track modifications and revert to previous versions if needed.

Access Rights Segregation

To ensure that sensitive data remains protected, companies should implement access rights segregation within their password management system. This involves dividing users into different groups based on their roles and responsibilities and assigning appropriate access permissions accordingly.

Role-Based Access Control (RBAC)

RBAC is a widely used method for implementing access rights segregation. With RBAC, administrators can create roles that represent different job functions within the company and assign appropriate permissions to each role. Users are then assigned to roles, ensuring that they only have access to the information they need to perform their tasks.
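
A minimal sketch of the idea in JavaScript (with hypothetical roles and permissions, not any particular product’s API):

```js
// Roles map to permission sets; users are checked through their assigned role.
const roles = {
  admin:  new Set(["read", "write", "share", "delete"]),
  editor: new Set(["read", "write", "share"]),
  viewer: new Set(["read"]),
};

const userRoles = { alice: "admin", bob: "viewer" };

function can(user, permission) {
  const role = userRoles[user];
  return role ? roles[role].has(permission) : false;
}

can("bob", "share"); // false — viewers may only read
```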

Attribute-Based Access Control (ABAC)

ABAC is a more flexible approach to access control, where permissions are granted based on a user’s attributes (e.g., job title, department, location, and so on) rather than predefined roles. This allows for greater customization and scalability, as administrators can create complex access rules that adapt to changing business requirements.
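
Sketched the same way (again with made-up attributes), ABAC replaces fixed roles with rules evaluated against user and resource attributes:

```js
// Each rule inspects attributes rather than a fixed role.
const rules = [
  (user, resource) =>
    user.department === resource.department && user.jobTitle === "Manager",
  (user, resource) => resource.public === true,
];

const allowed = (user, resource) =>
  rules.some((rule) => rule(user, resource));

allowed(
  { department: "Finance", jobTitle: "Manager" },
  { department: "Finance", public: false }
); // true — same department and a managerial title
```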

Auditing And Monitoring Activity

To maintain a secure password management system, administrators must be able to monitor and audit user activity. This sort of transparency allows you to know exactly who changed something at a particular point in time so you can take corrective action. This includes tracking changes to passwords, monitoring access attempts, and identifying potential security threats.

Activity Logging

A comprehensive password management system should log all user activity, including access attempts, password modifications, and sharing events. This information can be invaluable for detecting unauthorized access, troubleshooting issues, and conducting security audits.

For example, it’s nice to have a way to see who has used a particular password and when they used it, especially for troubleshooting permissions.
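
What such a log entry contains varies by product, but a useful minimum is who did what to which credential, when, and from where — a hypothetical shape:

```js
// Hypothetical log entry; real systems should also protect log integrity
// (append-only storage, signatures, or a write-once backend).
const entry = {
  timestamp: new Date().toISOString(),
  actor: "alice",
  action: "password.modified",
  target: "vault/prod-db-credentials",
  sourceIp: "203.0.113.7",
  success: true,
};
```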

Real-Time Notifications

In addition to logging activity, real-time alerts can help administrators quickly identify and respond to potential security threats. A password management system that provides real-time notifications for suspicious activity, such as multiple failed login attempts or unauthorized password changes, can be instrumental in preventing data breaches.

Reporting

Generating reports on user activity, password strength, and compliance can help administrators assess the overall health of their password management system and identify areas for improvement. Regularly reviewing these reports can also ensure that the company remains compliant with relevant industry regulations and best practices.

Best Practices For Implementing A Password Management System

To ensure the success of a password management system, it’s crucial to follow best practices for implementation and ongoing maintenance. You want to ensure that your passwords are managed in a way that is safe for everyone in your company while adhering to compliance guidelines for a secure environment.

First, Choose The Right Solution

Selecting the right password management system for your company is essential. Consider factors such as the size of your organization, the level of customization required, and your preferred hosting option (self-hosted vs. cloud-based) when evaluating solutions. Passwork, for example, offers a self-hosted solution with robust collaboration features, making it a suitable option for businesses looking for greater control and customization.

Next, Train Employees

Employee training is crucial for the successful adoption of a password management system. Ensure that all users understand how to use the system, the importance of password security, and company policies related to password management.

Regularly Review And Update Policies

As your business evolves, your password management policies should adapt accordingly. Regularly review and update your policies to ensure that they continue to meet your organization’s needs and maintain compliance with industry regulations.

Monitor And Audit System Activity

Stay vigilant by regularly monitoring and auditing your password management system. This will help you identify potential security threats and ensure that your system remains secure and up-to-date.

Password Policy Best Practices

To maintain a secure password management system, it’s essential to establish strong password policies and ensure that employees follow best practices.

Password Length And Complexity

A strong password policy should require a minimum password length and a combination of characters, including upper and lower case letters, numbers, and special characters. This helps to increase password entropy, making it more difficult for attackers to guess or crack passwords using brute force methods.
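
As a rule of thumb, entropy grows with length times the logarithm of the character-pool size — a back-of-the-envelope estimate, not a substitute for a proper strength checker:

```js
// Approximate entropy in bits: length × log2(pool size).
function estimateEntropyBits(length, poolSize) {
  return length * Math.log2(poolSize);
}

estimateEntropyBits(8, 26);  // ≈ 37.6 bits — 8 lowercase letters
estimateEntropyBits(12, 94); // ≈ 78.7 bits — 12 chars from printable ASCII
```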

Password Expiration And Rotation

Regularly changing passwords can help to minimize the risk of unauthorized access, especially in cases where passwords have been compromised without the organization’s knowledge. Implementing a password expiration policy that requires users to change their passwords at regular intervals can enhance security.

Two-Factor Authentication (2FA)

In addition to strong password policies, implementing two-factor authentication can provide an additional layer of security. By requiring users to provide a second form of verification, such as a code sent to a mobile device, 2FA reduces the risk of unauthorized access even if a password is compromised.

Prevent Reused Passwords

Employees should be discouraged from using the same password across multiple accounts and services. Encourage the use of unique passwords for each account to minimize the risk of unauthorized access in case one password is compromised.

Integrations And Compatibility

An effective password management system should be compatible with various platforms, applications, and services that your company uses. This ensures seamless integration and streamlined access management.

Single Sign-On (SSO)

SSO enables users to access multiple applications and services with a single set of credentials.

By integrating your password management system with SSO, you can simplify the login process for employees, reducing the need for multiple passwords and improving security.

Browser Extensions and Mobile Apps

A password management system that offers browser extensions and mobile apps can help ensure that employees have access to their passwords and credentials wherever they are. This enhances productivity and encourages the adoption of the password management system.

Custom Integrations

Depending on your company’s requirements, you may need to integrate your password management system with other tools, such as ticketing systems, customer relationship management platforms, or identity and access management solutions. Ensure that the password management system you choose is flexible enough to allow custom integrations and to connect with the other services your business relies on.

Backup And Disaster Recovery

A robust password management system should include backup and disaster recovery features to ensure the availability and integrity of your organization’s passwords and credentials.

Regular Backups

Implement a backup policy that includes regular backups of your password management system’s data. This helps to protect against data loss due to hardware failures, accidental deletions, or other unforeseen issues.

Encrypted Backups

Backups should be encrypted to protect the sensitive data they contain. Ensure that your password management system supports encrypted backups and uses strong encryption algorithms to safeguard your data.
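
As an illustration of what “encrypted backups” means in practice, here is a minimal Node.js sketch using AES-256-GCM. This is an assumption-laden toy, not a recommendation; in a real deployment, key management (a KMS or hardware module) is the hard part.

```js
const crypto = require("node:crypto");

// Encrypts a backup buffer with AES-256-GCM (authenticated encryption).
function encryptBackup(plaintext, key) {
  const iv = crypto.randomBytes(12); // unique nonce per backup
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

const key = crypto.randomBytes(32); // 256-bit key — keep it out of the backup!
encryptBackup(Buffer.from("vault export"), key);
```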

Disaster Recovery Plan

Develop a disaster recovery plan that outlines the steps to be taken in case of a system failure, data breach, or other security incidents. This plan should include procedures for restoring data from backups, as well as measures to prevent further damage or unauthorized access.

Evaluating And Selecting A Password Management Solution

When choosing a password management system, it’s important to thoroughly evaluate potential solutions and select the one that best meets your organization’s needs.

Security Features

Assess the security features offered by each solution, such as encryption algorithms, two-factor authentication support, and activity monitoring capabilities. Ensure that the solution adheres to industry standards and best practices for data security.

Scalability

Consider the scalability of the password management system, especially if your organization is growing or has plans for expansion. The solution should be able to handle an increasing number of users and passwords without compromising performance or security.

Ease of Use

User adoption is crucial for the success of a password management system. Evaluate the user interface and overall ease of use of each solution, as this can have a significant impact on employee adoption and satisfaction.

Cost

Consider the total cost of ownership for each password management system, including initial implementation costs, ongoing maintenance, and any additional fees for upgrades or add-on features. Be sure to weigh these costs against the potential benefits and savings offered by a more secure and efficient password management process.

Ongoing Maintenance And Support

Once your password management system is in place, it’s essential to keep it up-to-date and ensure that users receive the necessary support.

Software Updates

Regularly update your password management system to benefit from the latest security patches, feature enhancements, and bug fixes. This helps to maintain the stability and security of the system.

User Support

Provide user support for your password management system, including training materials, FAQs, and access to technical assistance when needed. This ensures that employees can effectively use the system and resolve any issues that may arise.

Periodic Security Assessments

Conduct periodic security assessments of your password management system to identify any potential vulnerabilities and ensure that it continues to meet your organization’s security requirements. This may include penetration testing, vulnerability scanning, and other security assessments.

Conclusion

Organizing password management in a company is a critical task for system administrators. By selecting the right solution, implementing access rights segregation, fostering collaboration, and actively monitoring and auditing the system, administrators can create a secure and efficient password management environment. Additionally, establishing strong password policies and choosing a solution like Passwork can offer businesses greater control and customization, providing a solid foundation for effective password management.

By following the best practices outlined in this guide, system administrators can enhance their organization’s overall security posture while improving productivity and streamlining access management.

]]>
hello@smashingmagazine.com (Iliya Garakh)
<![CDATA[Smashing Podcast Episode 60 With Mei Zhang: How Do You Design A Design Process?]]> https://smashingmagazine.com/2023/05/smashing-podcast-episode-60/ https://smashingmagazine.com/2023/05/smashing-podcast-episode-60/ Tue, 02 May 2023 12:00:00 GMT We’re talking about the process of design. How do you build a process to enable your best work? Vitaly Friedman talks to designer Mei Zhang to find out.

Show Notes

Weekly Update

Transcript

Vitaly Friedman: She’s a senior UX designer and a UX consultant with a strong product and strategy background. As a kid, she was busy creating art and fell in love with UX while studying industrial design in college. She has spent her career developing design systems and solving problems for e-commerce products that are loved by millions of people around the world. Now, she also loves helping designers uncover root causes, explore multiple directions, and identify sweet spots between user and business needs.

Vitaly: She’s currently working with Booking.com and resides in Amsterdam, Netherlands. Of course, she is a cat person, as it often is in the Smashing Podcast. And in her spare time, she can be found painting, skiing, serving her cats — there are a couple — writing on her design blog and learning about design, business, leadership and management. We know she’s a wonderful UX designer, but did you know that she used to swim in order to participate in the Olympics? That was one of her dreams, which unfortunately didn’t come true. However, it did help her develop a lung capacity of over 5,000, which is a big deal. My Smashing friends, please welcome Mei Zhang. Hello, Mei. How are you feeling today?

Mei Zhang: Hello. Hi, everyone. I’m smashing.

Vitaly: Oh, that’s wonderful to hear. How are you? Is it cold out there in Amsterdam these days or is it sunny?

Mei: Luckily, it was sunny in the couple of days. In the past couple of days.

Vitaly: So, it’s better. I have to ask this story. Swimming in the Olympics. Why did you decide to do this? Because I guess you were playing with design and UX already at this point. Or was it before or prior to design?

Mei: Oh.

Vitaly: Why did you decide to take on this challenge?

Mei: It was definitely before the design career. I was in my elementary school, and I fell in love with swimming. And as an ambitious little girl, I wanted to have some targets, so I needed to compete for the Olympics, because this is something very challenging. But unfortunately, I didn’t go through the competition. But I think it definitely gave me something, made me a stronger person. Not only physically, but also mentally. So I really appreciated that.

Vitaly: I have no doubt at all. We’ll probably bring up — I’ll probably bring up this question about how it in the end influenced your UX and design career. But maybe before we dive into that. And maybe you could share a story about how did you even end up in this design and UX world? Maybe you could share a bit about your journey and what brought you where you are today.

Mei: I think what brought me where I am today is the iPhone 4. I got the iPhone 4 as a gift in the first year of my college, and then I got to learn about human-computer interaction through the guidelines published by Apple. And another fun fact: the human-computer interaction guidelines were already there in 1987. That is what I remember. Whoa, it’s a long history of something that I had never heard about. I started studying, basically, UX design by myself. I was just genuinely interested in the fancy interactions at that time. What CSS can do for you.

Mei: I was also a Smashing Magazine fan. I follow all your articles and try to do something with CSS and JavaScript. And I think also during my study, people start discussing about what you want as a career after graduation, what industry you would like to join. I was lost at that time, but I know I love UX design and I’m good at it because all my school project was related somehow to human-computer interaction. And, I think, at that time, the IT industry also was booming because people started having Facebook. I think that somehow made me feel like maybe that is something that has a future. So, that is basically my journey into UX design.

Vitaly: But then, you ended up where you are here today. And you have all this. I’m always reminded of all these UX methodologies and methods and all the ways. And you have created these incredible mind maps as well. But all the things that you potentially need to keep in mind as a UX designer when you are working on a product or on a project. And maybe before we dive there, maybe we could speak a bit more particularly about breaking complexity into something that’s more manageable.

Vitaly: I know that you’ve been working or you are working on relatively or quite complex products. And again, just given this huge amount of all the different methods and options available to you as a UX designer, how do you choose your path? Or specifically, maybe, how do you start when you have a really complex. Maybe an enterprise product or maybe B2B or maybe anything that’s complicated and you need to break it down. How do you do that? What would be your process? And maybe also, your methods to make sense of it all?

Mei: Such a great question. I would guess the first step is always find what is the real problem. What we are designing for. To deep dive into the problems and find the root cause. That is definitely the first step I would choose because the problems also help the designers or people around you to define the process because with different problems you might need different methodologies. And also, the second step will also be identifying the stakeholders. As you mentioned, you have people around you who are genuinely interested or who are in charge of the project. Identify the people around you and what they need.

Mei: The outcome is not only the end product that deliver to the users, but also to. Let’s say it in the simple way. Make your stakeholders happy. I think those are the two basic principles for navigating through what methodologies that I pick. And also, you need to look at availabilities as well. That is, usually happens in the real life work. Maybe for example, you don’t have data for some project. But also, it’s impossible to collect that. Maybe you need to find another method that could answer the same questions that is available.

Vitaly: But then I’m also wondering: you also mentioned data. I’m actually quite wondering because I feel like very often, I end up in this dilemma with teams I’m working with. Where there is a person or there is a team, they have a very strong design vision. This is how it should be. It’s usually based on research and usually going to be very much focused on user needs or customer needs. A very customer-centric view. But then sometimes, it clashes against the business idea of how things should be and the business direction of where the company wants to go. And sometimes, I feel that there is this really strong tension between where the designer wants to go and what the, let’s say, A/B testing tells.

Vitaly: And maybe, testing is such a short-term thing. Where you test if it works now and then. It might be a good thing, of course, to improve things and that will drive conversion, though. But where do you see? How do you see this resolving? How do you get to this balance between doing something? Because again, we run A/B tests and this performs better than this. Against the big design, the grandiose, so to say, design vision that exists in designers’ heads based on user needs and based on business needs.

Mei: First of all, I don’t think those two A/B tests. Let’s say A/B testing and a great vision in the designer’s head is something that cannot exist together. I think they can co-exist because A/B testing is just one of methodologies to validate the concept. It’s the small steps to take you towards a big vision. It’s not a easy task, but it’s the designers who need to guide the product managers or guide your team towards the vision. That is actually sometimes underestimated by the outside because we have a lot of things showed to us designers because we are visionaries.

Mei: We have a vision, so we need to take that through. What I usually do is first, definitely have a great relationship with your product managers because you are actually working together as a whole to reach the vision. They are more business of course, and they are more data-driven or metrics-driven. But on the other hand, you are the user advocate. Build a good relationship and trust with your product managers and work together on a daily basis. It shouldn’t be like, "Ah, I don’t agree with you". Or something like this. But more be like, "Let’s sit together and make a great thing or make a great product."

Mei: And I think sometimes, I also feel like it’s really important to have a businessman side as a designer. Especially if you are working for an organization that’s aimed for profit, your responsibility is also to keep the business running. The business goal is also your goal as a designer, as well. Your responsibility is to craft a great user experience that will improve the business or make the business stronger. For example, learn about business metrics, understand the view from the product side. And also, sometimes I find what is helpful for me is to define user behavior metrics because for A/B testing.

Mei: Sometimes you, say that, maybe some business metrics doesn’t increase but the user behavior metrics were improving. You can also use this as a argument to get things through. It’s not only about A/B testing. It has to be improving business. But if you can prove that it’s going to improve the user experience and the user experience can lead into long-term business growth, then that will happen. And also, I think what I’m doing very often in the past is also to break the vision into smaller pieces that is experimentable.

Mei: In this case, it’s also help as a designer to validate your ideas. I know we are all, as a designer, we’re all proud of our ideas and we believe that’s going to work. And most of the time, of course it’s going to work, but we also need to use data and argument to support our ideas. I would say it is something. It definitely bring a lot of positive side from A/B testing to build a vision.

Vitaly: The reason why I brought this up actually because I’m just coming from a project where this has become a big issue. Where essentially, it seems like there is this very strong tension between, again, the ideas of we need to do something now and drive conversion up now. But again, we also need to think about the long-term goals. And very often, what happens is you might be improving things by showing a new set of popup very prominent and then a bit more prominent, then a bit more flashy and then even more flashy. But then it’s actually going to hurt your long-term goals. I actually want to maybe dig a little bit deeper. When you speak about user behavior metrics or any ways to capture the quality of the design work basically done. Could you maybe share a few of them that would be most important in your work?

Mei: I’m thinking about something related to the example you just gave about the flashing popup. One example I can think of right now is that, in the past, I also had experience where the product was pushing for metrics. They’re making things rainbowy or flashy. I think definitely what helped was to conduct user interviews to understand what is user’s point of view of that. They’ll be like, "Oh, I think this brand was just to trick me." They also understand the black UX part or the bad. Sorry. The bad UX pattern that try to trick them into something.

Mei: And also, something help me as well is to look into the long-term user flow because they tend to only focus on one metrics and improve that. But have you looked through the whole flow? Maybe the click rate went up, but in the end, less people are converting. Then you cannot say that this is a good solution. You just. Try to find different metrics that can, to build your argument with the product. And also, try to, in your daily basis, try to make your product manager or your product colleagues to more understand what is a good user experience.

Mei: Because I work with all kinds of product managers and some are like you mentioned in that case. Really focusing on one metrics and don’t care the UI. And there are also product managers who really understand what is UX experience. I want to do something good for the long run. Try to also influence your product managers to understand what is good for the long run. Because in the end, someone has to clean up the bad UX in the end because that will lead into something in the future.

Vitaly: Absolutely. I think it also heavily depends on the culture that the company has, the organization has and how the teams are organized. And sometimes, you see that there are. Whenever everything is siloed, you will end up in the situation where a silo would have very specific goals and they don’t even know what the other teams are doing. Or how their things that they may be performing or they’re working on in the vertical effect everyone else. This is more probably a slightly broader question in there, as well.

Vitaly: Maybe you could also share a bit of insight about some of the really complex challenges that you are facing at this moment. And something that you’re working on that, I would say, keep you awake at night. Hopefully not, but maybe there are some things. Just get sense about what you’re working on as well at the moment.

Mei: I couldn’t share inside details of product strategy with you.

Vitaly: Sure.

Mei: Because of the NDA stuff with my current employer, but I will say, the current challenge definitely about how to level up your people skills and communications as a designer through your career. Because I’m running a very big project right now. Basically, more than 30 stakeholders on the play. I really need to learn connecting people. How I can connect with people first by establishing yourself with your activities in your field. And also, to connect people and find the right person for the right question.

Mei: And also, at this point, you need to try to work through other people. I don’t know how to put it in the beautiful way, but more enable others to contribute to the project. In this sense, you need to really articulate the project and the impact of this project. So you can onboard people and to create a win-win situation where they can learn something from the project or they can do product improvement in their services, project as well — so if they would like to be onboarded and work with you.

Mei: Think that was about communication, connecting the people. But the most challenging part is leading the whole project. You need to be super organized, which I was not that great before. You need to have a roadmap of this project and keep updating this every day. So you can visualize what is going on. What are the updates, and also identify the key stakeholder for each phase of the project, of the activities. And how to communicate with them. And you need to visualize them, document them to help you organize the whole project. I guess that was the most challenging part for me.

Vitaly: That doesn’t sound like a lot of moving pixels around in Figma, though.

Mei: Which, I actually missed that part as well. I’m not sure if this is a common case, but I guess so. When you are running a big project where we are not in the phase of creating new ideas and Figma files. It’s more communicating, documenting, pitching or about the project.

Vitaly: This is just a normal state of things, I guess, all the time. Guess I become this person who would move away from, well, sketch at the time and Figma to spreadsheets. I don’t know. Much of my life these days is basically organizing things and also documents in Dropbox Paper or Google Doc. Just organizing things in a way that’s available, accessible to everyone else. It also goes, for example, for organizing meetings. I actually decided to take a design approach to design the best meetings experience. And this is really difficult, I think.

Vitaly: In general, processes which involve people be hard, of course. I’m also just curious about your take on the process because I know that you. Meetings including, for example. Because I know that you often say that you need to design your design process. And this is, very much plays. It’s a melody, beautiful melody to my ears because this is what I’ve been doing to some degree, I guess, for the last couple of years. I’m wondering though, how do you mean that? We’re designing the process. We need to figure out the right way of working for us, for the team, as well.

Vitaly: How do we design meetings? How do we? Do we do stand-ups? Do we do written stand-ups? When do we do retros? How often do we do this and that? Maybe you could share a few things that tend to work better for you that you learned working well. And something that you definitely advise as a consultant, as well, companies do really stay away from when it comes to design process.

Mei: I can quickly tell what companies should stay away from, in terms of a design process.

Vitaly: Sure.

Mei: Is to, for the sake of having a design process, to have a design process. Regardless of what problem you are trying to solve. I still remember in my career there was a company who really want to have a persona. I’m like, "Why we are going to create the personas?" They were like, "Oh, because everybody’s having a personas for this project and it’s a key important deliverable for understanding our customers. So we need this persona." So I’m like, "But do you have any?" I trying to explain persona is more you need to conduct interviews.

Mei: You need to gather datas and then you come up with someone that represents the key problems or key pain point of your customers. It’s not like you just create a persona out of a workshop with some people, internal colleagues of your company. So they’re like, "Oh, okay. Then we need to gather data or we need to have a lot of insight of the persona." But we couldn’t because they don’t have infrastructure to try user behavior. So I’m like, "No worries, just interview eight customers. It’s a good number. And try to find what are the common pain point or what’s a common desire or need they have? And then you have a persona."

Mei: That is something I learned through my career. Oh, you shouldn’t just say, "Oh, this thing looks fancy, the personas or something else. Oh, customer journey map, we need that." It’s not what you’re trying to understand and what do you have. And based on those two aspect, to try to find a methodology that really serve your needs or can help you move forward. This is definitely not advised for people or company. I think what I definitely enjoyed is to design, as you mentioned, design your own design process. Because when I was studying UX design, we have this design thinking process and everyone tried to follow.

Mei: Define a problem and try to understand and create something, iterate. I was also one of them trying, really into that. But then, when I start working I found, this is not always the case. You need to find what is the most important phase of the project. For example, if you are tackling a very complex problem and you don’t even understand what exact problem it is, then you need to spend a lot of effort in defining the problem phase. Or if it is a project really focused on deliverables, we need to shape a marketing video or we need to shape the design within two weeks. Then, maybe you need to spend more energy in the executing phase of the design.

Mei: While we are working, it’s very hard to have everything. To have a very complete design process where you have a solid deliverables for every phases. But you need to figure out which phase is the most important based on the needs and the problem and try to shift your energy there. But that doesn’t mean that you should skip some process. You can still have them, but it’s more trying to say what you have already have and not create new words on there. I think that’s what I learned from design your own design process.

Vitaly: That’s fine. You also, I always keep coming back to this. I don’t even know why. But I always feel that many of the colleagues I’m speaking to, they’re always just don’t even know how to navigate that space of UX methods and models and process. And sometimes, it feels like there is this huge amount of all these different things that very different companies are doing. And they’re inventing for themselves or using some of their other established, already established methods. Luckily, and fortunately for all of us, you have created two mind maps. Which I found really useful to be able to navigate the space in a bit more predictable way. Maybe you could tell a bit more about this and how it helps you in your work.

Mei: A very good question. At the beginning, I was just writing them down for myself. It’s more like library where what is available there and you can grab them as a building block to build up your own design process. But it’s not like something can mapped out the how of those design process and those methodologies and what it can bring. What I’m trying to say is to be flexible about your design process. To not just see the articles and I need this and this in exploration phase. But maybe you don’t need it based on your problem or what you are trying to design. Try to be flexible.

Mei: And also, I will say sometimes it’s more of the experience you get. When you are first time. For example, if you are conducting a user interview at first time or maybe you are doing a survey first time. It’s more you start learning how this methodology work and how you can improve based on the methodology. But then, as you try multiple methodologies in your career, you can reflect on. Well, this can help and what do I need to conduct this methodology? And then if you keep reflecting on them, it will help you in the future to decide, do I need this methodology in my design process? Will this fit the timeline? Will this fit the requirements? Will this be the best methodology to answer the business questions?

Mei: Then you start reflecting and then you can say, "Then, I don’t need this. Oh, I really need this methodology." It’s more, if you haven’t had a lot of experience, try to try them out. Even if you are not working or you are just doing an internship. But try things out to understand how those methodology work. And then, later on, you can. You get a next experience, then you can decide when to use what. So that would be my take.

Vitaly: That’s interesting because I think that to many of us, it’s... I don’t know... Many companies have the process. This is the process that they’re following through. It doesn’t matter what department. Doesn’t matter what their designers are working on. There is the process. This is how we work here kind of thing. And what I’m hearing from you is that basically you might need to be adaptive there. So if you are, say, switching from one design team that you’re working with or another team that maybe have different experience.

Vitaly: Maybe have different preferences. Maybe most of them are working remote. Maybe most of them are hybrid in one way or the other. So adjusting the methodology and the process based on the team that you have. The only thing that’s required there to get it right and to do it well is to know and be comfortable with the different techniques and different methods that are out there. Does that make sense? Is that pretty much what you do?

Mei: Yes, thanks. Yeah, definitely. That is a very great summary of what I just said.

Vitaly: But I think it is also very interesting because it can be quite challenging. Do you find yourself sometimes maybe stuck because you have a particular way of approaching a particular problem with the design team? But then you might have very different levels of experience on the team? You might feel like we need to do something because we might not be able to get things done in time. Or we are not moving along fast enough and I need to switch gears and move something to another methods.

Vitaly: The reason why I’m asking or what I’m asking here is that not only do we need to be able to switch and be adaptive moving from one team to another in your process. But also, as the process is in place, do you feel like sometimes you need to shift gears and change things and plug in something else because what you have is not working?

Mei: Yeah, definitely. I think a very great question. This is a daily life of designer, I guess.

Vitaly: The sad life of a designer, isn’t it?

Mei: Yeah, the sad. We have a dream design process defined before project or before we start working on something and maybe one month later something changed. Then you need to be flexible and adapt to it. We decided to collect user data because the PM was super into quantitative data and we need that. But our source was not available at that time. So we need to really think about, what can we do? Because we are not going to run the survey anymore as a design team or. What I did, I think it’s a really good step. I was also not super experienced at time. I’m the newbie in the company.

Mei: I bring this to the design team. So I never feel shy that if I couldn’t solve them myself, I should consult with other colleague. Then we start doing some root analysis. Why we need this survey? Because we want to discover problems. We don’t have a clear problem. We want to discover the problem. Then do something to also discover the problems without the researcher that can help us send the survey. Then we said, "Maybe we can do a diary study with UserTesting.com. We can set this up together."

Mei: So we did, in the end, a diary study. Those two methodologies actually serve the same purpose in the end. I guess you need to shift when you can, I think, maybe try to have another methodology that can give you the same insight or maybe. Also sometimes, just trust your gut feelings.

Vitaly: Sure.

Mei: If some data is not available, you can validate them later.

Vitaly: That’s right. But Mei, I have to ask a very provoking question at this point. And I’m sure that some of the listeners listening here will be, "What is this? What is he asking?" I do have to ask, do you think that chaos could also be a process? The reason why I’m asking is if you have a relatively small team. Imagine you have maybe two, three designers. You do not have this. And surely, we need to have research. We need to use some methods to make things work. Sometimes, you see companies trying to over organize things.

Vitaly: If you have a team of two or three, do you need daily stand-ups? Doesn’t seem necessary because people are there in the room talking all the time, anyway. It’s not like you have this big organization where you have five departments all doing different pieces and all that. Sometimes, I see companies feeling very comfortable in being extremely unorganized. Being chaotic. Not even having proper documentation and nothing. Obviously, the problem is that you actually end up with the knowledge being stuck with these people. If somebody leaves, that’s obviously an issue. Onboarding is a problem. But they feel like you can be very productive and very successful without having a proper process and pretty much a chaotic environment.

Mei: To be honest, I have to say that I agree with you.

Vitaly: Oh.

Mei: I think to not have a well established progress or being chaotic may be the norm for designers. Because we are creative beings. Sometimes, you get ideas or you discover something just randomly while understanding your customers, users. But I would say totally agree with you. If you have a small team and you are working very closely on a daily basis, you might not need to follow a design process super strictly. It’s more like, "we are in the understanding phase, then what can we do?" And we discuss together. It’s more like you just need the rough framework to guide you through. And the iteration will also be very fast-paced. You don’t need to go through everything then iterate again. Totally agree with. Another point, I feel like the design process is sometimes also more for the non-designers. Your product stakeholders in the organization or people who are not in your project or another designer who don’t have any background knowledge. It’s more for them to help to organize your self-process or just it’s for your own deliverable. Your ideas that work. To have it to communicate to the outside. That is what I have to say.

Vitaly: That makes perfect sense. Well, as we’re wrapping up here, I do have to ask you of course. But this is a question that I’m asking everyone and I’m really curious about your answer as well. Do you have a particular dream project? A really complicated challenge? A really complicated UX? I don’t know. Monolithic challenge that is probably so hard that it’s pretty, almost impossible to think about it? Just to give you an idea, some of my colleagues when trying to answer this question, they start thinking about, "Oh, I would love to design some, I don’t know. A deck or I don’t know. A control center for Rocket Science Center or anything like that."

Vitaly: Some other would say, "I just want to be able to work with United Nations." It goes really different ways. I’m just curious, do you have a particular dream project or dream task maybe or dream challenge that you would love to tackle one day?

Mei: I will say, I will go for the second direction. I really want to work for the sustainability topic or some project for NGOs because I have been spending my career working for E-commerce company. I really want to contribute to some non-profit organizations that, for example, sustainability or a turtles saving organization. I think what I can help them is my experience in E-commerce to convert people. Maybe I can convert more people doing the good stuff. That would be something I’d definitely love to work on in the future.

Vitaly: Maybe just totally ruining the arc, the story arc of the podcast. I do have to hook onto the thing that you mentioned about E-commerce because I’m just really curious. I spent quite a bit of time around E-commerce as well. Maybe you could share a few stories about things that you learned by working in E-commerce. Thing that’s how customers think or some important things to keep in mind when it comes to E-commerce UX in general.

Mei: I think what I have learned is your customers are smarter than you thought. That is what I have learned. Sometimes, you try to trick them. Sorry. Another dark part in UX I’m talking about. You think you can convert them somehow, but actually they know. They know what you are doing. It’s not the customer of 10 years ago on the E-commerce platform compared to right now. They’re very press sensitive. They compare with multiple competitors. They compare and they make the right decision for them. And that is also related to what we talk in the beginning of the podcast.

Mei: You have to focus on the long run to create a great experience for the long run. To bring them benefit in the long run because they understand everything. And you cannot. If you got them converted once, you might not get them converted the second time and they might leave you if they have really bad experience. I think right now the E-commerce world is really competitive, but also that is good for the customers because they have multiple choices and then they have learned everything. I think that is what I have learned from the E-commerce experience. The customers, they also grow as you grow.

Vitaly: We’ve been learning about UX and design today, but if there is one thing that I do have to ask, Mei because I know that Mei is very much interested in the something that maybe bothers or excites or inspires all of us. Who knows? I know that you’ve been playing with ChatGPT and AI in general, Mei. Do you see? I don’t know. Do you see this wonderful tool, AI as an opponent to us? Something that we need to fight or something that we’re going to embed in our daily workflow and just make the best use of it? How do you use AI today?

Mei: Very good questions. I think, we should see AI as our friends. We’re holding hands together.

Vitaly: Good friends.

Mei: And help us.

Vitaly: The best friends or?

Mei: Good friend.

Vitaly: Good friends.

Mei: Good friends for now before they replace our job, which will happen, I guess. Recently, I started using ChatGPT to write write documentations or write presentations for me. It’s still, you need to write down, get the key point and then ChatGPT will help you generate a good sentence. It saves your time as a designer. You could spend more time in Figma or creating new ideas or creating something or dreaming vision for your company for the coming three years. I think definitely AI saves our time and make sure we can concentrate on works that requires more creativity.

Vitaly: But I do have to ask a follow-up question. Do you think, Mei, that AI is creative?

Mei: I think, to some extent. They are creative based on basically data and stuff that already exist or they could find on the internet. But they might not be able to dream further. Maybe predict human in 10 years. But I’m not sure. I’m not a expert in AI. I would say they are creative to some extent, but it’s also up to us to think about, do we want them to be creative or not?

Vitaly: That’s a good question. Maybe, we can resolve this issue once and for good once we ask ChatGPT if it thinks it is creative. And if so, then it should better prove it to us. Well, if you, dear listener, would like to hear more from Mei, you can find her LinkedIn where she’s at, Mei Zhang, and also Medium. Medium.com/ThisisMei, if I’m not mistaken. Well, thank you so much for joining us today, Mei. Do you have any parting words of wisdom to the future generations who are going to listen to this very podcast 25 years from now thinking, "What are they talking about? Everything is AI anyway now."

Mei: What I want to share is definitely know AI is something not new, but something innovative in our generation right now. Designers are using ChatGPT to create their daily slides. But I would like to tell the future generations: being creative and following your intuitions is something that cannot be replaced by AI. I think I really treasure. I think designers should be really treasured because we have the power that might not be able to be replaced by any machines and stuff because we are human. We are caring and we are always creative and we can connect the dots. That is something you should develop or treasure as a skill. I think that is something I would like to tell to the future generations.

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[Make Your May Magnificent (2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/04/desktop-wallpaper-calendars-may-2023/ https://smashingmagazine.com/2023/04/desktop-wallpaper-calendars-may-2023/ Sun, 30 Apr 2023 15:00:00 GMT Let’s welcome May with some new wallpapers! For over twelve years, we’ve challenged you, our dear readers, to create wallpaper calendars for our monthly wallpapers posts. No matter if you’re into lettering, illustration, or photography, the series is the perfect opportunity to create a small artwork and share it with people from all around the world — to brighten up someone’s day, cater for some fresh inspiration, or to put your creative skills to the test. Of course, it wasn’t any different this time around.

In this collection, you’ll find desktop and mobile wallpapers for May 2023, created with love by artists and designers who took on the challenge. They all come in versions with and without a calendar and can be downloaded for free. As a little bonus goodie, we also compiled a selection of timeless May favorites from our archives at the end of this post. Thank you to everyone who tickled their ideas and shared their designs with us this month! Happy May!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express their emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
The Mushroom Band

“My daughter asked me to draw a band of mushrooms. Here it is!” — Designed by Vlad Gerasimov from Georgia.

ARRR2-D2

Designed by Ricardo Gimenes from Sweden.

Blast Off!

“Calling all space cadets, it’s time to celebrate National Astronaut Day! Today we honor the fearless explorers who venture beyond our planet and boldly go where no one has gone before.” — Designed by PopArt Studio from Serbia.

May Your May Be Magnificent

“May should be as bright and colorful as this calendar! That’s why our designers chose these juicy colors. We also have other options in our selection of May calendars.” — Designed by MasterBundles from Ukraine.

Chill Out

“Summer chill is a relaxed and carefree vibe that is often associated with the summer season. It is a time when people slow down and enjoy the warm weather. The atmosphere is laid-back and easy-going, with a focus on having fun and creating happy memories.” — Designed by Kasturi Palmal from India.

Moving Mountains

“The month of May is a moving moment. I’m only writing this to draw attention to the letter ‘M’! The ‘M’ inspired me, and my idea for the wallpaper was to make mountains, smooth mountains, of course.” — Designed by Philippe Brouard from France.

Think Outside The Fox

Designed by Ricardo Gimenes from Sweden.

Casablanca Style

“We spent spring in Casablanca. Enjoying the sunset while admiring the Hassan II Mosque.” — Designed by Veronica Valenzuela from Spain.

Me, Myself, And I

“The wallpaper is a reflection on Francisco Goya’s The Third of May, 1808, a painting with a powerful anti-war statement.” — Designed by Bhabna Basak from India.

Stop Child Labor

“Children should have the opportunity to learn, play, and grow in a safe environment and not be forced to work at a young age. As individuals, we can also make a difference by supporting companies that have strict policies against child labor and by raising awareness about this issue in our communities. By working together, we can help to create a world where every child has the opportunity to reach their full potential, free from the burden of labor. So I designed this wallpaper to raise my voice about this issue. Hope you like it!” — Designed by Hrishikesh Shome from India.

Oldies But Goodies

From bold to delicate, from minimalist to funny, below you’ll find some May favorites from our archives that are just too good to be forgotten. Which one is yours? (Please note that these wallpapers don’t come with a calendar.)

Hello May

“The longing for warmth, flowers in bloom, and new beginnings is finally over as we welcome the month of May. From celebrating nature on the days of turtles and birds to marking the days of our favorite wine and macarons, the historical celebrations of the International Workers’ Day, Cinco de Mayo, and Victory Day, to the unforgettable ‘May the Fourth be with you’. May is a time of celebration — so make every May day count!” — Designed by PopArt Studio from Serbia.

The Monolith

Designed by Ricardo Gimenes from Sweden.

Blooming May

“In spring, especially in May, we all want bright colors and lightness, which was not there in winter. Our designers decided to convey these feelings in the May calendar.” — Designed by MasterBundles from Ukraine.

Understand Yourself

“Sunsets in May are the best way to understand who you are and where you are heading. Let’s think more!” — Designed by Igor Izhik from Canada.

Poppies Paradise

Designed by Nathalie Ouederni from France.

Lake Deck

“I wanted to make a big painterly vista with some mountains and a deck and such.” — Designed by Mike Healy from Australia.

Today, Yesterday, Or Tomorrow

Designed by Alma Hoffmann from the United States.

April Showers Bring Magnolia Flowers

“April and May are usually when everything starts to bloom, especially the magnolia trees. I live in an area where there are many and when the wind blows, the petals make it look like snow is falling.” — Designed by Sarah Masucci from the United States.

Spring Gracefulness

“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Make A Wish

Designed by Julia Versinina from Chicago, USA.

All Is Possible In May

“Edwin Way Teale once said that ‘[t]he world’s favorite season is the spring. All things seem possible in May.’ Now that the entire nature is clothed with grass and branches full of blossoms that will grow into fruit, we cannot help going out and enjoying every scent, every sound, every joyful movement of nature’s creatures. Make this May the best so far!” — Designed by PopArt Studio from Serbia.

Rainy Days

“Winter is nearly here in my part of the world and I think rainy days should be spent at home with a good book!” — Designed by Tazi Design from Australia.

Birds Of May

“Inspired by a little-known ‘holiday’ on May 4th known as ‘Bird Day’. It is the first holiday in the United States celebrating birds. Hurray for birds!” — Designed by Clarity Creative Group from Orlando, FL.

Be On Your Bike!

“May is National Bike Month! So, instead of hopping in your car, grab your bike and go. Our whole family loves that we live in our bike-friendly community. So, bike to work, to school, to the store, or to the park — sometimes it is faster. Not only is it good for the environment, but it is great exercise!” — Designed by Karen Frolo from the United States.

Add Color To Your Life!

“This month is dedicated to flowers, to join us and brighten our days giving a little more color to our daily life.” — Designed by Verónica Valenzuela from Spain.

Always Seek Knowledge

“‘As knowledge increases, wonder deepens.’ (Charles Morgan) So I tried to create an illustration based on this.” — Designed by Bisakha Datta from India.

Welcome May With An Ice Cream!

“May is the last month of spring, the weather is getting hotter every day and it starts to feel almost like summer. So, the best thing to cool ourselves and bring summer closer is to… welcome May with an ice cream!” — Designed by WebOlution from Greece.

Game Boy

Designed by Sander Geenen from Belgium.

Colorful

Designed by Lotum from Germany.


Tentacles

Designed by Julie Lapointe from Canada.

Sweet Lily Of The Valley

“The ‘lily of the valley’ came earlier this year. In France, we celebrate the month of May with this plant.” — Designed by Philippe Brouard from France.

Geo

Designed by Amanda Focht from the United States.

Love Japanese Food

Designed by Ricardo Gimenes from Sweden.

May The Force Be With You

“Yoda is my favorite Star Wars character, and ‘may’ has a funny double meaning.” — Designed by Antun Hirsman from Croatia.

Enjoy May!

“Springtime, especially Maytime, is my favorite time of the year. And I like popsicles — so it’s obvious, isn’t it?” — Designed by Steffen Weiß from Germany.

Celestial Longitude Of 45°

“Lixia is the 7th solar term according to the traditional East Asian calendars, which divide a year into 24 solar terms. It signifies the beginning of summer in East Asian cultures. It usually begins around May 5 and ends around May 21.” — Designed by Hong, Zi-Cing from Taiwan.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Write Better CSS By Borrowing Ideas From JavaScript Functions]]> https://smashingmagazine.com/2023/04/write-better-css-borrow-ideas-javascript-functions/ https://smashingmagazine.com/2023/04/write-better-css-borrow-ideas-javascript-functions/ Fri, 28 Apr 2023 08:00:00 GMT I like to think of writing CSS as writing functions that describe how your layouts respond to change. When we forget the principles of writing a good function, here’s some of what can happen:

  • We lose time.
    When we have to worry about side effects, changes take longer.
  • We create bugs.
    My favorite example is an online store where the “Buy” buttons were hidden due to misuse of viewport units.
  • We build fewer features.
    When changes are scary and time-consuming, they often don’t happen.

Let’s look at how we can borrow best practices and ideas from writing good JavaScript functions for writing CSS that is easy to use, free from unwanted side effects, and resilient to change.

Avoiding Unwanted Side Effects

When you change something in your system, it shouldn’t change something else by surprise. That’s as true for CSS as it is for JavaScript functions.

Let’s look at this arrow icon in a circle as an example:

It looks fine, but let’s say we want a narrower arrow icon:

Now the containing circle is squished! This is an example of an unwanted side effect. Using a narrower arrow ruins the shape of the circle.

If we inspect the element in DevTools, we can see that the shape of the containing circle depends on the size of the inner icon and the padding around the icon.

Ideally, the interior icon shouldn’t change the shape of the containing circle. Here’s a demo of how to fix the squished icon:
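As a minimal sketch of one possible fix (the class name and sizes are illustrative, not taken from the original demo), the circle gets fixed dimensions of its own so the icon inside can no longer reshape it:

.icon-circle {
  /* The circle’s size no longer depends on the icon or its padding. */
  width: 48px;
  height: 48px;
  border-radius: 50%;
  /* Center whatever icon is placed inside the fixed-size circle. */
  display: flex;
  align-items: center;
  justify-content: center;
}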

The CSS sets max-width: 900px on the container, and each card gets a little breathing room with padding: 5vw. This may look fine on the surface, but there’s a problem: the container has an upper bound while the padding doesn’t. As the screen gets wider, the content gets crushed.

See the Pen Example of padding crushing content [forked] by Yaphi.

Possible solutions include:

  • Using viewport or container breakpoints to keep the padding under control,
  • Using the CSS min() function to set an upper bound on the padding (sketched after this list), or
  • Using fixed units, such as pixels, that won’t grow indefinitely with the window.
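As a minimal sketch of the min() option (the 5vw and 48px values are illustrative), the padding grows with the viewport but never exceeds a fixed upper bound:

.card {
  /* Fluid padding, capped so wide screens can’t crush the content. */
  padding: min(5vw, 48px);
}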

What these solutions have in common is that they account for what happens when the viewport width changes. Similarly, we can avoid many CSS problems by considering the layout as output and anticipating what could happen when the inputs change.

Ahmad Shadeed has a great name for this technique: Defensive CSS. The idea is that we can “future-proof” styles by thinking about them as inputs that output a UI and anticipating situations that would diminish the output’s usability.

Conclusion

Coding a layout isn’t about laying things out on a page but describing how they respond to change. For that reason, it’s risky to treat CSS like constants rather than functions.

Fortunately, the same ideas that help us write good functions can help us write good CSS, namely:

  • Avoid unwanted side effects.
  • Use the right parameters.
  • Consider how inputs change outputs.

What ties these ideas together is a question I hope you’ll ask yourself the next time you write CSS: How should this layout respond to change?

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Yaphi Berhanu)
<![CDATA[Accessible Target Sizes Cheatsheet]]> https://smashingmagazine.com/2023/04/accessible-tap-target-sizes-rage-taps-clicks/ https://smashingmagazine.com/2023/04/accessible-tap-target-sizes-rage-taps-clicks/ Thu, 27 Apr 2023 12:00:00 GMT Rage taps are annoying and frustrating: those wonderful occurrences in our interfaces when we need to tap twice or sometimes three times to continue our journeys. Of course, sometimes they happen because the website is too slow, but sometimes it’s the target size of interactive elements that is the culprit.

So how big should our interactive elements be these days? What would be a reliable size for icons, links and buttons — in navigation and on mobile? How do we make it more difficult for our users to make mistakes? Let’s take a look.

Note: You can find a whole video chapter on designing for touch in Smart Interface Design Patterns as well — along with 30 other chapters all around UX and design patterns.

Target Sizes Cheatsheet

One of the common recommendations for target sizes on mobile is 44×44px. This is a little bit misleading because screen pixels, or at least device-independent pixels (dips), are scaled to a multiple of the display resolution. So pixels differ between screens, and when we talk about sizes, we should probably be speaking about dips rather than pixels.

Depending on where an element appears on the screen, it needs more or less padding. In general, we are very precise in our input in the center of the screen, but we are least precise on the edges of the screen (both on the top and at the bottom).

According to Steven Hoober’s research in his book Touch Design For Mobile Interfaces, to minimize rage taps, we need to aim for 11mm (or 31pt / 42px) at the top of the screen and 12mm (or 34pt / 46px) at the bottom of the screen. In the center, though, we could potentially go as low as 7mm (or 20pt / 27px). This includes both the width and the padding of an interactive element.

How do point units translate to CSS pixels or Android/iOS units? Fortunately, Steven Hoober provides a helpful conversion table to help you translate from points to px and em units, Android SPs or DPs, iOS points and Windows DIP or px.

Not All Pixels Are The Same

As we’ve seen above, target sizes change depending on where components appear on the screen. It’s worth noting that according to the WCAG 2.1 AAA level requirements, all targets should measure at least 44 by 44px, except if the target is in a sentence or block of text. For such exceptions, we could be using 27px as a goal, but in general, the larger, the better.

For sticky menus at the top or bottom of the screen, we should probably aim for boxes of around 44–46px, or preferably even more. However, for links that appear on the screen as the user scrolls down the page, we will probably be able to avoid most issues even with smaller components.

This is also why we will probably be able to place at most five items in the bottom tabs on a mobile phone. Instead, we might need to use a bottom sheet that slides up from the bottom of the screen as an overlay.

Prefer “Actions” Button To Single Icons For Data Tables

Complex tables typically have hover actions that appear once a user starts hovering over a particular row. They typically include everything from highlight and export to move and delete.

In testing, showing icons on hover produces too many mistakes: not only do users often accidentally jump to the wrong row as they navigate horizontally towards the icons, but they also accidentally click on the wrong spot and have to start all over again.

To avoid rage clicks, it might be a good idea to test how well an “Actions” button or a split button would perform instead. Indeed, that button could live on every row, would open on tap/click, and wouldn’t close automatically. It might not be ideal for every use case, but it definitely gives users more sense of control as they take action in a row.

Provide An Assistant For Complex Manipulations

With complex manipulation, such as rotation of an image, or selection of a small part of a larger area, we often rely on pinch and zoom or zoom in/out buttons. These options, of course, work, but they easily become a bit tedious to use for very precise manipulations — especially if used for a while.

Instead, we can attach a little handle to allow users to move their selection within the object faster and with more precision. This is how Tylko allows users to customize their shelves on mobile. Zooming is supported as well, but it’s not necessary to select one of the areas.

When Multiple Taps Are Better Than One

But what do we do if some tap areas have to be small? Perhaps we can’t reserve 27×27px for each icon — for example, when we suggest a color selection in an eCommerce site? Well, in that case, one option to consider would be to prompt a “proper” selection of colors with one additional tap. This might be a bit slower in interaction, but way more accurate.

Fewer rage clicks: Grønland Color Picker Microinteraction, designed by Mykolas Puodžiūnas. (Large preview)

Always Maximize Clickable Area

Whenever possible, encapsulate the entire element, along with enough padding, to ensure that you hit the magical 42–46px size and prevent rage taps for good. This typically means adding enough padding around icons but also preferring full-width or full-height bars for accordions and navigation.
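As a rough sketch of this approach (assuming a 24×24px icon; the class name and all values are illustrative), the padding does the work of growing the tap target:

.icon-button {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  /* 24px icon + 2 × 12px padding = a 48×48px tap target. */
  padding: 12px;
}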

Ahmad Shadeed presents a few useful examples of using spacing to increase clickable areas and prevent rage clicks. Any Lupe provides even more suggestions in her article on accessible target sizes.

Wrapping Up

When designing for touch today, we need to use at least 27×27px for small links or icons in the content area and at least 44×44px for icons at the top and at the bottom of the page.

Personally, I would always go up to 30×30px and 48×48px to make sure mistakes are really difficult to make. And, of course, always use the full width or full height of the component for clickable areas. Hopefully, this will help us remove any possible rage taps from our websites altogether — and many of your users will sincerely appreciate it.

You can find more details on navigation UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Useful Resources

There are a few wonderful resources on accessible target sizes that might be helpful if you’d like to dive deeper in the topic:

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Potentially Dangerous Non-Accessibility Of Cookie Notices]]> https://smashingmagazine.com/2023/04/potentially-dangerous-non-accessibility-cookie-notices/ https://smashingmagazine.com/2023/04/potentially-dangerous-non-accessibility-cookie-notices/ Mon, 24 Apr 2023 08:00:00 GMT No matter what your stance is on them, no matter what your perspective is on data protection, web advertisement, setting cookies, EU’s General Data Protection Regulation (GDPR), and privacy preferences, cookie consent widgets (or “cookie banners”) are a reality on the web today.

For this reason, it is worth looking into how accessible and usable these banners are or can be. They have become, for better or worse, a component of the majority of today’s websites. Even more, cookie banners are often the first thing a user encounters. And, of course, once implemented, they are part of every page of a website.

Sometimes, cookie banners are a technical necessity because of the page’s feature set or because of advertisements on the page. Even more often, cookie banners are not built by the front-end team but are a ready-made solution, like UserCentrics or others.

Before I explain why the cookie banner deserves special attention regarding its accessibility, let’s quickly explain how the current gold standard of web accessibility, Web Content Accessibility Guidelines (WCAG) Version 2.1, works.

WCAG consists of principles, guidelines, and success criteria. The latter are testable steps to check against a webpage. For example:

  • “Is the main language of the document set?”
  • “Does this non-text content have a suitable text alternative?”
  • “Is it perceivable where my focus is when I’m using the web presence with the keyboard (or another tech that emulates keyboard presses)?”

You may have noticed that these are “yes or no” questions. Accordingly, this means that the final verdict of any given success criterion is either “pass” or “fail.”

Additionally, conformance to WCAG, as defined by the W3C (the governing body of the Web), means that none of its success criteria is allowed to “fail” when the whole document needs to be conformant:

“Conformance to a standard means that you meet or satisfy the ‘requirements’ of the standard. In WCAG 2.0, the ‘requirements’ are the Success Criteria. To conform to WCAG 2.0, you need to satisfy the Success Criteria, that is, there is no content which violates the Success Criteria.”

W3C Working Group Note

No nuance here. Going back to our cookie consent interface, this means that the banner (or any other component) alone has the potential to negatively affect the WCAG conformance of an entire web project.

WCAG conformance could be a big legal deal for many websites, whether part of the public sector in the European Union or the United States, as it is considered to fall under non-discrimination or market access laws or overall human rights to access to information. Webpages frequently must adhere to directives and regulations that directly or indirectly refer to WCAG, often its newest version, and conformance to its level AA standards. Therefore, all the following WCAG criteria are viewed through this lens, being fully aware that they are only a mere starting point when it comes to true web accessibility. On top of that, cookie consent interfaces are implemented on every subpage of a website, consequently harming accessibility and conformance throughout an entire website.

So, in order to not let a faulty cookie banner interface drag down your page’s conformance with accessibility laws and, more importantly, not exclude users from accessing and exercising their rights, let’s list what to look for, what to configure, and what to build properly in the first place.

Contrast Errors

This is especially relevant for important controls such as setting cookies or accepting the recommended cookie set. It is crucial that form controls and text can be sufficiently perceived. Unsurprisingly, a solid contrast is also important for WCAG in general: success criteria 1.4.3 and 1.4.11 both define contrast boundaries.

What To Do

When you are using a ready-made cookie management solution, try to influence the colors (if possible, potentially in your cookie vendor’s settings) and make sure interactive controls have sufficient color contrast.

Additionally, if your website relies on a dedicated contrast mode for WCAG conformance, check whether it extends to (or influences) the cookie management interface. I have seen cases in my accessibility auditor practice where this was not considered, and an inaccessible (often branded) color combination was used in the cookie interface, thinking the contrast mode takes care of every color-related violation. But the contrast setting of the website did not affect the third-party cookie banner due to it being, well, third-party and loaded from external sources or after the contrast mode had done its work, resulting in a “Fail” on WCAG’s contrast-related success criteria.

Pseudo Buttons

Another cookie banner issue is, unfortunately, an error pattern that you can also find outside of cookie management: divs or spans with click events posing as links or buttons. These controls may be styled like buttons but lack the semantic information of a button.

On top of that, these controls usually aren’t keyboard focusable. Hence, many serious barriers and WCAG violations occur all at once. If we imagine the most extreme “pseudo” button, e.g., a div with a click handler, it would at least violate success criteria 2.1.1 (Keyboard), because it is neither reachable nor “activatable” with a keyboard, and 4.1.2 (Name, Role, Value), because it doesn’t “introduce” itself as a button and lacks a programmatic label.

What To Do

The easiest thing to do, assuming you have built the cookie management interface yourself, is to replace the above-mentioned pseudo buttons with real <button> elements, because they provide semantics, focusability, and even keyboard event handling for free. And even if we aren’t talking literally about buttons, the pattern is the same: check your cookie prompt for interactive controls that are only styled to look like “the real thing” but consist of non-semantic divs and spans. This is a red flag telling you to implement native interactive elements, like a, button, or input, instead.
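As a minimal before-and-after sketch (the acceptCookies handler and the class name are hypothetical):

<!-- Before: styled like a button, but with no semantics and no focusability. -->
<div class="accept-button" onclick="acceptCookies()">Accept all cookies</div>

<!-- After: announced as a button, keyboard focusable and activatable for free. -->
<button type="button" class="accept-button" onclick="acceptCookies()">Accept all cookies</button>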

The situation gets a lot tougher, of course, when these semantic errors are in a third-party script and are, therefore, beyond your direct influence and control. Understandably, we have to leave the engineering side of things and start to dive into politics of some sort. If you work within an organization where the decision of cookie management infrastructure is outside your control, you have to escalate matters to your supervisors and managers (especially, but not only when your web projects have to adhere to accessibility laws).

Three abstract steps have to happen:

  1. Your organization has to become aware of the barrier and potential legal risk — “up” to the powers that have the influence to change technical decisions like these.
  2. As a consequence, the vendor that provided the faulty cookie banner has to be contacted about the issue.
  3. A form of pressure should be applied by your organization — not just for your own sake but also regarding all the other web pages where the faulty cookie banner negatively influences accessibility and conformance.

In a possible fourth step, your company or agency should reflect on its vending process for third-party services and the HTML (and possible barriers) that come with it.

Unlabeled Form Fields

When you think about it, the main user control that one could imagine for cookie management widgets is a form control: You can select which set of cookies you are willing to accept by interacting with checkboxes in a form element. And, of course, it is important that checkbox inputs are built in the correct way.

Alas, that is not always the case. While a checkbox and its label may appear visually adjacent, the checkbox can still lack a programmatic label. This adds unnecessary confusion and barriers to the interface and also constitutes a failure of success criterion 1.3.1 when you look into the web accessibility standard.

What To Do

The most solid strategy to connect form inputs with their corresponding labels is to:

  1. Use a label element for the label (obviously).
  2. Establish an id on the respective input you want to label.
  3. Add a for attribute to the label, filling it with the control’s id you created in the last step.

This also works for inputs of all types, like textareas and selects. Here’s an example of how a properly labeled checkbox could look:

<input type="checkbox" id="marketing-cookies" />
<label for="marketing-cookies">Accept marketing cookies</label>

If you can’t directly influence the HTML of the cookie banner’s code, the situation is comparable to the situation around pseudo buttons. Make sure that necessary pressure is applied to your cookie service provider to fix the problem. All of their customers will thank you for it, and even more so the people who visit their sites.

Broken Dialog Semantics (Or None At All)

Quite a few cookie banners are actually cookie dialogs, and of the modal kind. Modal, in the context of a dialog, means that such a window blocks everything but itself, leaving only itself accessible. That is, at least, the theory. But quite a few cookie management dialogs want to be just that aggressive, presenting themselves as a modal part of the interface while having none of the corresponding semantics and behavior, which violates WCAG success criterion 4.1.2 (Name, Role, Value).

What To Do

Up until recently, the recommendation was to build a dialog with WAI-ARIA roles and states and implement focus management yourself (or use Kitty Giraudel’s great a11y-dialog component).

But the situation has (mostly) changed for the better. Lately, the native <dialog> element has matured to the point where it’s being recommended in most contexts as long as it is used reasonably. A great win for accessibility, in my opinion. The past way of building (modal) dialogs had so many moving parts and factors (roles, states, focus behaviors) to think about and implement manually that it was quite difficult to get it right. Now creating a dialog means using an aptly-named HTML element (and initializing it with .showModal() if you think the cookie dialog needs to be interface-blocking).
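As a minimal sketch of a modal cookie dialog built on the native element (the ids and copy are illustrative):

<dialog id="cookie-dialog" aria-labelledby="cookie-heading">
  <h2 id="cookie-heading">Cookie settings</h2>
  <!-- Consent form controls go here. -->
</dialog>

<script>
  const cookieDialog = document.getElementById('cookie-dialog');
  // showModal() makes the dialog modal: it blocks the rest of the page,
  // and the browser handles focus for us.
  cookieDialog.showModal();
</script>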

What I’ve written so far is, of course, also valid when you cannot influence a third party’s code, and what I wrote earlier about comparable situations and potential cookie consent barriers is valid as well. If you detect errors in the third-party script you are implementing (such as no focus trapping, no dialog role, no aria-modal="true" — and if everything else points towards “modalness”), escalate things internally and educate the decision-making powers about web accessibility consequences. Maybe even educate the third-party developers that things concerning modals have gotten a lot better recently.

Cookie Banners Are Hard To Find In The First Place

There are three typical places where you can usually find cookie consent interfaces, at least visually:

  1. As a modal dialog, i.e., in the middle or — more rarely — corners of the viewport;
  2. On top, sometimes in a fixed manner;
  3. At the bottom of the viewport, sometimes also somewhat positioned in a fixed way.

But what matters far more for some people is how easy the banner is to find, should they go hunting for it. A great way of presenting this very problem is a presentation that accessibility specialist Léonie Watson gave some time ago. Léonie is a seasoned screen reader user, and her presentation showcases a bunch of webpages and how the placement and “findability” of cookie banners influence the screen reader experience, particularly as it relates to privacy. Hampering the ability to find important content in a document can, for example, negatively affect WCAG 1.3.2 (Meaningful Sequence).

What To Do

In Léonie’s presentation, the best practices for cookie notice findability become very clear, especially in the last example:

  • Place the banner preferably at the top of the document.
  • Use a headline in the cookie banner and make it either visible or visually hidden (see the sketch after this list) to help screen reader users “get a grasp about the webpage” and allow them to navigate by headings.
  • Build a bridge back to proper dialog semantics by making sure that if a dialog is meant to be the “exclusive” part of the interface, it uses appropriate semantic and state descriptions (see above for details).
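For the visually hidden variant, one common utility-class pattern hides the headline visually while keeping it available to screen readers (a generic sketch, not tied to any particular cookie vendor):

.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
  white-space: nowrap;
  border: 0;
}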

When we’re talking about changing third-party code, I reckon you know the drill by now. Try to influence this code indirectly on the “political level” because direct control is not possible.

Conclusion

Hopefully, two things emerged while reading this article:

  1. Awareness of the issue, namely, that an often unloved stepchild interface element has the potential to make it harder for some people to manage their privacy settings and, on top of that, to even pose a legal risk.
  2. A sense of how you can remediate barriers you encounter when working with a cookie management banner. The direct way, described in some detail earlier, often has to do with code, styling, or overall education on how to prevent these issues in the future. The indirect way leads to a path of either setting the consent interface up properly or influencing the inner and outer politics of your vendor scripts. And again, there is the aspect of educating everyone involved. This time, structured information may be aimed at the powers that be in your organization, showing them that their choice of service providers may have unintended consequences.

But regardless of whether you and your team manage to fix accessibility bugs directly or indirectly in your cookie consent interfaces, you can see their ubiquity and component architecture as an advantage. By getting the accessibility right in one place, you influence many other pages (or even foreign websites) for the better.

If you want to extend your horizon regarding the user experience side of cookie banners and learn about how you can actually turn privacy settings into a pleasant and respectful involvement with at least EU laws, please proceed to Vitaly’s smashing read, “Privacy UX: Better Cookie Consent Experiences”.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Marcus Herrmann)
<![CDATA[Exploring The Potential Of Web Workers For Multithreading On The Web]]> https://smashingmagazine.com/2023/04/potential-web-workers-multithreading-web/ https://smashingmagazine.com/2023/04/potential-web-workers-multithreading-web/ Fri, 21 Apr 2023 10:00:00 GMT Web Workers are a powerful feature of modern web development and were introduced as part of the HTML5 specification in 2009. They were designed to provide a way to execute JavaScript code in the background, separate from the main execution thread of a web page, in order to improve performance and responsiveness.

The main thread is the single execution context that is responsible for rendering the UI, executing JavaScript code, and handling user interactions. In other words, JavaScript is “single-threaded”. This means that any time-consuming task, such as a complex calculation or data processing, would block the main thread and cause the UI to freeze and become unresponsive.

This is where Web Workers come in.

Web Workers were implemented as a way to address this problem by allowing time-consuming tasks to be executed in a separate thread, called a worker thread. This enabled JavaScript code to be executed in the background without blocking the main thread and causing the page to become unresponsive.

Creating a web worker in JavaScript is not a complicated task. The following steps provide a starting point for integrating a web worker into your application:

  1. Create a new JavaScript file that contains the code you want to run in the worker thread. This file should not contain any references to the DOM, as it will not have access to it.
  2. In your main JavaScript file, create a new worker object using the Worker constructor. This constructor takes a single argument, which is the URL of the JavaScript file you created in step 1.
    const worker = new Worker('worker.js');
    
  3. Add event listeners to the worker object to handle messages sent between the main thread and the worker thread. The onmessage event handler is used to handle messages sent from the worker thread, while the postMessage method is used to send messages to the worker thread.
    worker.onmessage = function(event) {
      console.log('Worker said: ' + event.data);
    };
    worker.postMessage('Hello, worker!');
    
  4. In your worker JavaScript file, add an event listener to handle messages sent from the main thread using the onmessage property of the self object. You can access the data sent with the message using the event.data property.
    self.onmessage = function(event) {
      console.log('Main thread said: ' + event.data);
      self.postMessage('Hello, main thread!');
    };
    

Now let’s run the web application and test the worker. We should see messages printed to the console indicating that messages were sent and received between the main thread and the worker thread.

One key difference between Web Workers and the main thread is that Web Workers have no access to the DOM or the UI. This means that they cannot directly manipulate the HTML elements on the page or interact with the user.

Web Workers are designed to perform tasks that do not require direct access to the UI, such as data processing, image manipulation, or calculations.

Another important difference is that Web Workers are designed to run in a sandboxed environment, separate from the main thread, which means that they have limited access to system resources and cannot access certain APIs, such as the localStorage or sessionStorage APIs. However, they can communicate with the main thread through a messaging system, allowing data to be exchanged between the two threads.

Importance And Benefits Of Web Workers For Multithreading On The Web

Web Workers provide a way for web developers to achieve multithreading on the web, which is crucial for building high-performance web applications. By enabling time-consuming tasks to be executed in the background, separate from the main thread, Web Workers improve the overall responsiveness of web pages and allow for a more seamless user experience. The following are some of the key benefits of Web Workers for multithreading on the web.

Improved Resource Utilization

By allowing time-consuming tasks to be executed in the background, Web Workers make more efficient use of system resources, enabling faster and more efficient processing of data and improving overall performance. This is especially important for web applications that involve large amounts of data processing or image manipulation, as Web Workers can perform these tasks without impacting the user interface.

Increased Stability And Reliability

By isolating time-consuming tasks in separate worker threads, Web Workers help to prevent crashes and errors that can occur when executing large amounts of code on the main thread. This makes it easier for developers to write stable and reliable web applications, reducing the likelihood of user frustration or loss of data.

Enhanced Security

Web Workers run in a sandboxed environment that is separate from the main thread, which helps to enhance the security of web applications. This isolation prevents malicious code from accessing or modifying data in the main thread or other Web Workers, reducing the risk of data breaches or other security vulnerabilities.

Better Resource Utilization

Web Workers can help to improve resource utilization by freeing up the main thread to handle user input and other tasks while the Web Workers handle time-consuming computations in the background. This can help to improve overall system performance and reduce the likelihood of crashes or errors. Additionally, by leveraging multiple CPU cores, Web Workers can make more efficient use of system resources, enabling faster and more efficient processing of data.

Web Workers also enable better load balancing and scaling of web applications. By allowing tasks to be executed in parallel across multiple worker threads, Web Workers can help distribute the workload evenly across multiple cores or processors, enabling faster and more efficient processing of data. This is particularly important for web applications that experience high traffic or demand, as Web Workers can help to ensure that the application can handle an increased load without impacting performance.
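As a minimal sketch of this idea (worker.js and the tasks array are assumptions, and a real pool would also collect results and handle errors), work can be spread across roughly one worker per logical core:

// Spawn roughly one worker per logical CPU core.
const workerCount = navigator.hardwareConcurrency || 4;
const workers = Array.from({ length: workerCount }, () => new Worker('worker.js'));

// Distribute tasks across the pool in round-robin fashion.
tasks.forEach((task, i) => {
  workers[i % workerCount].postMessage(task);
});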

Practical Applications Of Web Workers

Let us explore some of the most common and useful applications of Web Workers. Whether you’re building a complex web application or a simple website, understanding how to leverage Web Workers can help you improve performance and provide a better user experience.

Offloading CPU-Intensive Work

Suppose we have a web application that needs to perform a large, CPU-intensive computation. If we perform this computation in the main thread, the user interface will become unresponsive, and the user experience will suffer. To avoid this, we can use a Web Worker to perform the computation in the background.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const result = event.data;
  console.log(result);
};

// Send a message to the worker to start the computation.
worker.postMessage({ num: 1000000 });

// In worker.js:

// Define a function to perform the computation.
function compute(num) {
  let sum = 0;
  for (let i = 0; i < num; i++) {
    sum += i;
  }
  return sum;
}

// Define a function to handle messages from the main thread.
onmessage = function(event) {
  const num = event.data.num;
  const result = compute(num);
  postMessage(result);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with a parameter (num) that specifies the number of iterations to perform in the computation. The worker receives this message and performs the computation in the background. When the computation is complete, the worker sends a message back to the main thread with the result. The main thread receives this message and logs the result to the console.

This task involves adding up all the numbers from 0 to a given number. While this task is relatively simple and straightforward for small numbers, it can become computationally intensive for very large numbers.

In the example code we used above, we passed the number 1000000 to the compute() function in the Web Worker. This means that the compute function will need to add up all the numbers from 0 to one million. This involves a large number of addition operations and can take a significant amount of time to complete, especially if the code is running on a slower computer or in a browser tab that is already busy with other tasks.

By offloading this task to a Web Worker, the main thread of the application can continue to run smoothly without being blocked by the computationally intensive task. This allows the user interface to remain responsive and ensures that other tasks, such as user input or animations, can be handled without delay.

Handling Network Requests

Let us consider a scenario where a web application needs to initiate a significant number of network requests. Performing these requests within the main thread could cause the user interface to become unresponsive and result in a poor user experience. In order to prevent this issue, we can utilize Web Workers to handle these requests in the background. By doing so, the main thread remains free to execute other tasks while the Web Worker handles the network requests simultaneously, resulting in improved performance and a better user experience.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const response = event.data;
  console.log(response);
};

// Send a message to the worker to start the requests.
worker.postMessage({ urls: ['https://api.example.com/foo', 'https://api.example.com/bar'] });

// In worker.js:

// Define a function to handle network requests.
function request(url) {
  return fetch(url).then(response => response.json());
}

// Define a function to handle messages from the main thread.
onmessage = async function(event) {
  const urls = event.data.urls;
  const results = await Promise.all(urls.map(request));
  postMessage(results);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with an array of URLs to request. The worker receives this message and performs the requests in the background using the fetch API. When all requests are complete, the worker sends a message back to the main thread with the results. The main thread receives this message and logs the results to the console.

Parallel Processing

Suppose we have a web application that needs to perform a large number of independent computations. If we perform these computations in sequence in the main thread, the user interface will become unresponsive, and the user experience will suffer. To avoid this, we can use a Web Worker to perform the computations in parallel.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const result = event.data;
  console.log(result);
};

// Send a message to the worker to start the computations.
worker.postMessage({ nums: [1000000, 2000000, 3000000] });

// In worker.js:

// Define a function to perform a single computation.
function compute(num) {
  let sum = 0;
  for (let i = 0; i < num; i++) {
    sum += i;
  }
  return sum;
}

// Define a function to handle messages from the main thread.
onmessage = function(event) {
  const nums = event.data.nums;
  const results = nums.map(compute);
  postMessage(results);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with an array of numbers to compute. The worker receives this message and performs the computations using the map method. (Note that a single worker still processes the array sequentially; to run the computations truly in parallel, you would distribute the numbers across several workers.) When all computations are complete, the worker sends a message back to the main thread with the results. The main thread receives this message and logs the results to the console.

Limitations And Considerations

Web workers are a powerful tool for improving the performance and responsiveness of web applications, but they also have some limitations and considerations that you should keep in mind when using them. Here are some of the most important ones:

Browser Support

Web workers are supported in all major browsers, including Chrome, Firefox, Safari, and Edge. However, some older or niche browsers may not support web workers or may have limited support.

For a more extensive look at browser support, see Can I Use.

It is important to check browser support for any feature before using it in production code and to test your application thoroughly to ensure compatibility.

Limited Access To The DOM

Web workers run in a separate thread and do not have access to the DOM or other global objects of the main thread. This means you cannot directly manipulate the DOM from a web worker or access global objects like window or document.

To work around this limitation, you can use the postMessage method to communicate with the main thread and update the DOM or access global objects indirectly. For example, you can send data to the main thread using postMessage and then update the DOM or global objects in response to the message.
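A minimal sketch of that indirect pattern (the message shape and the status element are illustrative):

// In the worker: ask the main thread to update the UI on our behalf.
self.postMessage({ type: 'update-ui', text: 'Processing complete' });

// In the main thread: apply the DOM change the worker requested.
worker.onmessage = function(event) {
  if (event.data.type === 'update-ui') {
    document.getElementById('status').textContent = event.data.text;
  }
};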

Alternatively, there are some libraries that help solve this issue. For example, the WorkerDOM library enables you to run the DOM in a web worker, allowing for faster page rendering and improved performance.

Communication Overhead

Web workers communicate with the main thread using the postMessage method, which introduces communication overhead: the time and resources required to establish and maintain communication between the Web Worker and the main thread. This can delay the processing of messages and potentially slow down the application. To minimize this overhead, you should only send essential data between threads and avoid sending large amounts of data or frequent messages.

Limited Debugging Tools

Debugging Web Workers can be more challenging than debugging code in the main thread, as there are fewer debugging tools available. To make debugging easier, you can use the console API to log messages from the worker thread and use browser developer tools to inspect messages sent between threads.

Code Complexity

Using Web Workers can increase the complexity of your code, as you need to manage communication between threads and ensure that data is passed correctly. This can make it more difficult to write, debug, and maintain your code, so you should carefully consider whether using web workers is necessary for your application.

Strategies For Mitigating Potential Issues With Web Workers

Web Workers are a powerful tool for improving the performance and responsiveness of web applications. However, when using Web Workers, there are several potential issues that can arise. Here are some strategies for mitigating these issues:

Minimize Communication Overhead With Message Batching

Message batching involves grouping multiple messages into a single batch message, which can be more efficient than sending individual messages separately. This approach reduces the number of round-trips between the main thread and Web Workers. It can help to minimize communication overhead and improve the overall performance of your web application.

To implement message batching, you can use a queue to accumulate messages and send them together as a batch when the queue reaches a certain threshold or after a set period of time. Here’s an example of how you can implement message batching in your Web Worker:

// Create a message queue to accumulate messages.
const messageQueue = [];

// Create a function to add messages to the queue.
function addToQueue(message) {
  messageQueue.push(message);

  // Check if the queue has reached the threshold size.
  if (messageQueue.length >= 10) {
    // If so, send the batched messages to the main thread.
    postMessage(messageQueue);

    // Clear the message queue.
    messageQueue.length = 0;
  }
}

// Add a message to the queue.
addToQueue({type: 'log', message: 'Hello, world!'});

// Add another message to the queue.
addToQueue({type: 'error', message: 'An error occurred.'});

In this example, we create a message queue to accumulate messages that need to be sent to the main thread. Whenever a message is added to the queue using the addToQueue function, we check if the queue has reached the threshold size (in this case, ten messages). If so, we send the batched messages to the main thread using the postMessage method. Finally, we clear the message queue to prepare it for the next batch.
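To also flush the queue after a set period of time, as mentioned above, a timer can complement the size threshold (a sketch; the 100ms interval is illustrative):

// Periodically flush whatever has accumulated, even below the threshold.
setInterval(() => {
  if (messageQueue.length > 0) {
    postMessage(messageQueue);
    messageQueue.length = 0;
  }
}, 100);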

By batching messages in this way, we can reduce the overall number of messages sent between the main thread and Web Workers, minimizing communication overhead.

Avoid Synchronous Methods

These are JavaScript functions or operations that block the execution of other code until they are complete. A long-running synchronous method blocks the thread it runs on: on the main thread, it makes the application unresponsive, and inside a worker, it prevents the worker from handling new messages. To avoid this, break long-running computations into smaller asynchronous steps, for example with setTimeout() or setInterval().

Here is a little demonstration:

// In the worker
self.addEventListener('message', (event) => {
  if (event.data.action === 'start') {
    // Use a setTimeout to perform some computation asynchronously.
    setTimeout(() => {
      const result = doSomeComputation(event.data.data);

      // Send the result back to the main thread.
      self.postMessage({ action: 'result', data: result });
    }, 0);
  }
});

Be Mindful Of Memory Usage

Web Workers have their own memory space, which can be limited depending on the user’s device and browser settings. To avoid memory issues, you should be mindful of the amount of memory your Web Worker code is using and avoid creating large objects unnecessarily. For example:

// In the worker
self.addEventListener('message', (event) => {
  if (event.data.action === 'start') {
    // Use a for loop to process an array of data.
    const data = event.data.data;
    const result = [];

    for (let i = 0; i < data.length; i++) {
      // Process each item in the array and add the result to the result array.
      const itemResult = processItem(data[i]);
      result.push(itemResult);
    }

    // Send the result back to the main thread.
    self.postMessage({ action: 'result', data: result });
  }
});

In this code, the Web Worker processes an array of data and returns the result to the main thread using the postMessage method. However, processing the entire array in one go can be problematic.

The reason is that the code receives and processes the whole array at once, meaning that all the data must be held in memory at the same time. If the data set is very large, this can cause the Web Worker to consume a significant amount of memory, potentially exceeding the memory limit allocated to the Web Worker by the browser.

To mitigate this issue, you can process the data in smaller chunks, for example, by sending it to the worker in batches via postMessage so that each batch can be processed and discarded before the next one arrives, or by transferring large binary data as a Transferable object (such as an ArrayBuffer) so that it is moved rather than copied between threads.
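A minimal sketch of the chunked approach (the chunk size and message shapes are illustrative, and processItem is the same hypothetical helper as in the earlier example):

// In the main thread: send the data in batches instead of all at once.
const CHUNK_SIZE = 1000;
for (let i = 0; i < data.length; i += CHUNK_SIZE) {
  worker.postMessage({ action: 'chunk', items: data.slice(i, i + CHUNK_SIZE) });
}
worker.postMessage({ action: 'done' });

// In the worker: process each batch as it arrives and let it be garbage-collected.
const results = [];
self.addEventListener('message', (event) => {
  if (event.data.action === 'chunk') {
    event.data.items.forEach((item) => results.push(processItem(item)));
  } else if (event.data.action === 'done') {
    self.postMessage({ action: 'result', data: results });
  }
});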

Browser Compatibility

Web Workers are supported in most modern browsers, but some older browsers may not support them. To ensure compatibility with a wide range of browsers, you should test your Web Worker code in different browsers and versions. You can also use feature detection to check if Web Workers are supported before using them in your code, like this:

if (typeof Worker !== 'undefined') {
  // Web Workers are supported.
  const worker = new Worker('worker.js');
} else {
  // Web Workers are not supported.
  console.log('Web Workers are not supported in this browser.');
}

This code checks if Web Workers are supported in the current browser and creates a new Web Worker if they are supported. If Web Workers are not supported, the code logs a message to the console indicating that Web Workers are not supported in the browser.

By following these strategies, you can ensure that your Web Worker code is efficient, responsive, and compatible with a wide range of browsers.

Conclusion

As web applications become increasingly complex and demanding, the importance of efficient multithreading techniques — such as Web Workers — is likely to increase. Web Workers are an essential feature of modern web development that allows developers to offload CPU-intensive tasks to separate threads, improving application performance and responsiveness. However, there are significant limitations and considerations to keep in mind when working with Web Workers, such as the lack of access to the DOM and limitations on the types of data that can be passed between threads.

To mitigate these potential issues, developers can follow the strategies mentioned earlier, such as using asynchronous methods and being mindful of the complexity of the task being offloaded.

Multithreading with Web Workers is likely to remain an important technique for improving web application performance and responsiveness in the future. While there are related techniques for parallelism and background work in JavaScript, such as sharing memory between threads with SharedArrayBuffer or offloading work to a server over WebSockets, Web Workers have several advantages that make them a powerful tool for developers.

Adopting more recent technology such as WebAssembly may open up new opportunities for using Web Workers to offload even more complex and computationally-intensive tasks. Overall, Web Workers are likely to continue to evolve and improve in the coming years, helping developers create more efficient and responsive web applications.

Additionally, many libraries and tools exist to help developers work with Web Workers. For example, Comlink and Workerize provide a simplified API for communicating with Web Workers. These libraries abstract away some of the complexity of managing Web Workers, making it easier to leverage their benefits.

Hopefully, this article has given you a good understanding of the potential of web workers for multithreading and how to use them in your own code.

]]>
hello@smashingmagazine.com (Sarah Oke Okolo)