High-Level Development Trends Apple Hinted At During WWDC 2021
In this blog
- Async/Await syntax in Swift is the new hotness.
- And Combine isn't quite the darling it once was.
- AsyncSequence may be coming to eat Combine's lunch.
- SwiftUI is the future, and now probably the present, for a lot of new development.
- But UIKit is continuing to keep its pace.
- Get used to getting explicit about error handling in your apps.
- Finished your core app? Great! You're not done.
- Get used to not knowing as much about unregistered app users.
- You may need to start rethinking the login and account management flows in your apps.
- Apple TV and tvOS continue to be products in the Apple lineup.
- The line between "an iOS app" and "a Mac app" is getting narrower and narrower.
- What do Spatial Audio, dynamic visual Maps improvements and Object Capture all have in common?
Async/Await syntax in Swift is the new hotness.
Alright, this one wasn't exactly a deeply coded message. Apple was about as explicit as it could be that Swift's new Async/Await syntax, available in Swift 5.5, is THE new way of writing and structuring asynchronous code. In fact, Apple went to pretty extensive lengths this year to talk down its own prior guidance on asynchronous techniques, even ones introduced only a couple of years ago with Combine. And from the code comparisons presented, it was easy to appreciate the extent to which Async/Await code is more readable, easier to reason about, harder to mess up, and just plain cleaner than completion handlers, and even cleaner than WWDC 2019's Combine framework.
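To illustrate the difference, here's a rough before-and-after sketch. The greeting-fetcher and its simulated latency are hypothetical, not from Apple's slides:

```swift
import Foundation

// Before: a completion handler with the classic optional-value /
// optional-error pair. Every code path must remember to call `completion`.
func fetchGreeting(completion: @escaping (String?, Error?) -> Void) {
    DispatchQueue.global().async {
        completion("Hello from a callback", nil)
    }
}

// After (Swift 5.5): the same operation reads top to bottom, and errors
// propagate with ordinary `throws` semantics.
func fetchGreeting() async throws -> String {
    try await Task.sleep(nanoseconds: 100_000_000) // simulate latency
    return "Hello from async/await"
}

// Calling it is just as linear: no nesting, no forgotten callbacks.
Task {
    let greeting = try await fetchGreeting()
    print(greeting)
}
```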
And Combine isn't quite the darling it once was.
The aforementioned Combine framework got almost no love this WWDC. And honestly, it didn't really advance that much last year at WWDC '20 either. Combine was introduced just a couple of short years ago as Apple's take on the Functional Reactive Programming (FRP) style that had been gaining a lot of traction with indie frameworks like RxSwift, and when it debuted at WWDC '19, Apple seemed very bought-in on it. They released Publisher-based APIs layered on top of many asynchronous system flows such as Notification Center and URLSession. In addition, Combine seemed to be one of the foundational underpinnings of SwiftUI, allowing your UI code to be a visual representation of application state and to react to changes in that state.
But outside of SwiftUI, Combine never saw much widespread usage except for one common use case: network requests. And while it's certainly true that the Combine syntax for creating a URLSession dataTaskPublisher is much more succinct than its old completion handler counterpart, Combine wasn't quite the right tool for that job. The essence of FRP frameworks is dealing with data that changes over time; network requests, however, are generally one-and-done operations. Combine handled the asynchronicity well and allowed subsequent operations on the retrieved data to be chained on very cleanly, but it came with the cost of managing the context switching between a Publisher of a given type and the type itself, as well as managing an AnyCancellable for every Publisher. URLSession's new async data(from:) method brings concise, clear, linear syntax and error handling techniques that should be old hat for Swift developers by now.
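A side-by-side sketch makes the tradeoff visible (the URL here is a placeholder):

```swift
import Foundation
import Combine

let url = URL(string: "https://example.com/data.json")!

// Combine: succinct, but you juggle an AnyCancellable and pull errors
// out of the stream's completion event.
var cancellables = Set<AnyCancellable>()
URLSession.shared.dataTaskPublisher(for: url)
    .map(\.data)
    .sink(
        receiveCompletion: { completion in
            if case .failure(let error) = completion { print("Failed: \(error)") }
        },
        receiveValue: { data in print("Got \(data.count) bytes") }
    )
    .store(in: &cancellables)

// Async/await (iOS 15 / macOS 12): linear flow, plain error propagation.
func fetchData() async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}
```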
In the two years since Combine's introduction, many of us refactored our old completion handler code to use a Combine Publisher instead. Now it seems clear that we can achieve even better gains in code clarity by moving to Async/Await.
AsyncSequence may be coming to eat Combine's lunch.
As mentioned above, Functional Reactive Programming, and Combine in particular, really shines when it encapsulates code that deals with data changing over time and with streams of data delivered over time. Async/Await alone doesn't address that use case directly. But another newly introduced cousin does: AsyncSequence. AsyncSequence excels in the same way Async/Await does, in that it lets you write code that handles changes over time in straightforward, familiar syntax. It's as easy as iterating with a for loop, with the caveat that you don't know when each iteration will happen. Very. Easy.
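Here's a minimal sketch using one of the built-in AsyncSequences, URL.lines (iOS 15), which delivers a resource line by line as it arrives; the log-printing use case is just an illustration:

```swift
import Foundation

// url.lines is an AsyncSequence of Strings. The loop body runs each
// time the next line becomes available; you just don't control when.
func printLog(from url: URL) async throws {
    for try await line in url.lines {
        print(line)
    }
}
```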
Now, there will still be a handful of use cases where Combine remains the right tool for the job (e.g., auto-searching by reacting to individual characters typed into a text field while debouncing to wait for a pause in keystrokes, or layering robust retry logic onto flaky network requests), but I have a feeling those will be fewer and farther between. In the coming years, as most of us refactor away much of our Combine code (as well as the remaining completion handler code) in favor of Async/Await and AsyncSequence, maybe an even newer, more succinct style will emerge to cleanly handle those last remaining Combine use cases as well.
SwiftUI is the future, and now probably the present, for a lot of new development.
Apple is betting very heavily on SwiftUI being the future of Apple development. That was never clearer than at last year's WWDC, when it was announced that Widget development (one of iOS 14's tentpole new features) could only be done using SwiftUI. While this year didn't bring any such SwiftUI-exclusive user-facing features, if anyone was keeping a SwiftUI vs. UIKit scorecard, it was pretty clear which UI framework got the most attention in presentations: SwiftUI. While it could be argued that the attention disparity exists because SwiftUI has so much more maturing to do and features to gain in order to catch up to UIKit, I think it's something different.
SwiftUI's declarative code style is so much easier to mentally parse and reason about than its imperative counterpart in UIKit. Apple has lately put a lot of emphasis on the concept of local reasoning, and SwiftUI's syntax does such a great job of bundling everything you might want to know about a UI control into a single place in code that it delivers on local reasoning in spades. And when coding concepts are easier to reason about, coding becomes more accessible to more aspiring developers, which means more apps on the App Store, which to Apple means... I'll let you fill in the blanks.
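To make that concrete, here's a minimal, hypothetical SwiftUI view. Everything about this control, its state, layout, styling, and behavior, is visible in one readable place:

```swift
import SwiftUI

// Local reasoning in action: data (@State), layout (VStack), styling
// (font, padding), and behavior (the Button action) all live together.
struct CounterView: View {
    @State private var count = 0

    var body: some View {
        VStack(spacing: 12) {
            Text("Count: \(count)")
                .font(.title)
            Button("Increment") { count += 1 }
        }
        .padding()
    }
}
```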
This year's additions to SwiftUI's APIs and capabilities have addressed many of the big pain points that naysayers have thrown out. And given that the initial minimum deployment target for SwiftUI was iOS 13, it's getting pretty difficult to make the case that developers shouldn't use it because there aren't enough users on iOS 13+ devices. Go ahead. Start thinking about how you could use SwiftUI in new features as they come up. It feels good to be on the same train Apple themselves seem to be on.
But UIKit is continuing to keep its pace.
It would be one thing if SwiftUI were getting all the attention it is and UIKit were completely stagnant. Then the writing would definitely be on the wall for all of us to see. But it's hard to see it entirely that way. This year, Apple had sessions on new behavior and styling options for UIButton, on table and collection view performance enhancements, and on implementing some of the newly introduced iPadOS multitasking and productivity enhancements.
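As a taste of the new UIButton options, here's a minimal sketch of iOS 15's UIButton.Configuration API; the title, subtitle, and icon are illustrative:

```swift
import UIKit

// UIButton.Configuration bundles styling and content that previously
// required subclassing or manual layout fiddling.
var config = UIButton.Configuration.filled()
config.title = "Buy Now"
config.subtitle = "Free shipping"
config.image = UIImage(systemName: "cart")
config.imagePadding = 8
config.cornerStyle = .capsule

let button = UIButton(configuration: config)
```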
This year, Apple also provided new APIs that let apps present sheet views covering only the bottom half of the screen on iOS. Developers have long been implementing this via custom controls while yearning for a first-party solution. And curiously, this new capability is only available in UIKit; SwiftUI devs still need to cobble it together manually. It seems clear that the UIKit and SwiftUI teams at Apple are not necessarily working together to ship all new features in lockstep. It could even be argued that some of the new UIKit list and collection view cell refresh APIs introduced this year were done to catch up with the way SwiftUI handles data refreshes.
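For the curious, here's a minimal sketch of that half-height sheet using UISheetPresentationController's detents (iOS 15); the presenting and presented view controllers are placeholders:

```swift
import UIKit

// Present a sheet that rests at half height (.medium) and can be
// dragged up to full height (.large).
func presentHalfSheet(from presenter: UIViewController, sheet viewController: UIViewController) {
    if let sheetController = viewController.sheetPresentationController {
        sheetController.detents = [.medium(), .large()]
        sheetController.prefersGrabberVisible = true
    }
    presenter.present(viewController, animated: true)
}
```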
So should you use UIKit or SwiftUI for your new project? Yes. Definitely one of those two. 😜 Apple has done what it can to reassure legacy iOS developers that UIKit isn't going anywhere soon, and that they can continue to leverage the knowledge and techniques they've built up over the years well into the future. At the same time, it would be foolish to ignore where Apple seems to be guiding new developers, and that Apple seems proud to tout how many of the new features introduced these past couple of years were implemented internally using SwiftUI. It's probably about time to at least open up to a mix of the two UI frameworks in your apps.
Get used to getting explicit about error handling in your apps.
If I had a nickel for every time I saw the keyword throws in a code sample slide this year... I might be able to pick up one of those slick M1 iPad Pros. Structured error handling is a primary side effect of the new structured concurrency model introduced in Swift 5.5 with Async/Await. Throwing errors is back to being the norm, and by the looks of it, Apple is delighted to have us back here. For a while now, there have been a number of weird ways we've had to implement error handling in our Swift code, depending on the use case.
With completion handler-style callbacks, it was impossible to throw errors out of the asynchronous work, resulting in workarounds like the Result type, which bundles either a return value or an error condition into the callback closure. In Combine code, Publishers advertised both the published value type and the type of error they could publish as their generic parameters, right out front. But handling errors from a Publisher involved handling the stream's completion event and then checking whether it was a failure or a regular completion. It was all kind of wonky syntax, but it got the job done.
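For a reminder of what those patterns looked like, here's a small sketch of both; the values and the URLError failure type are purely illustrative:

```swift
import Combine
import Foundation

// Pattern 1: a completion handler smuggling success-or-failure through
// a Result, because the closure itself can't throw.
func load(completion: @escaping (Result<String, Error>) -> Void) {
    DispatchQueue.global().async {
        completion(.success("value"))
    }
}

// Pattern 2: Combine, where errors arrive through the completion event
// rather than a catch block.
let cancellable = Just("value")
    .setFailureType(to: URLError.self)
    .sink(
        receiveCompletion: { completion in
            if case .failure(let error) = completion {
                print("Stream failed: \(error)")
            }
        },
        receiveValue: { print("Received: \($0)") }
    )
```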
But none of that was the simple do { try ... } catch { ... } syntax we all learned in Swift 101. Now, with the introduction of Async/Await, it seems we may finally be able to return to that simpler time: asynchronous functions can return to simpler error throwing techniques, and callers can return to simpler error handling techniques. So now that it's easy to be explicit about error conditions in our code, we need to start thinking that way from the outset again. Whereas in completion handler and Combine code it was sometimes easy to take the shortcut of swallowing errors, or of returning empty or nil values just to keep things simple, we can now afford to be explicit about our functions' error conditions and push the responsibility of error handling to their callers.
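And here's the simpler world we get back with Swift 5.5; the feed-loading function and its simulated latency are hypothetical:

```swift
import Foundation

// An async function that simply throws...
func loadFeed() async throws -> [String] {
    try await Task.sleep(nanoseconds: 100_000_000) // simulate latency
    return ["post-1", "post-2"]
}

// ...and a caller that handles errors with plain do/try/catch,
// just like Swift 101 taught us.
Task {
    do {
        let posts = try await loadFeed()
        print("Loaded \(posts.count) posts")
    } catch {
        print("Feed failed to load: \(error)")
    }
}
```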
Finished your core app? Great! You're not done.
Shortcuts, Siri Intents, Mac Catalyst, Widgets, Watch Complications, App Clips, Spotlight Search... I'm out of breath.
OS-level integrations seemed like an unspoken theme this year at WWDC. It's not necessarily a new trend, but I think the number of potential ways of "going beyond the app" has reached a critical mass for me to really take notice this year. With Shortcuts coming to the Mac this year, there was a renewed spotlight on breaking down the functionality of your app into many small independent features that users can take advantage of outside the app itself. Conceptually, that same line of thinking applies to Widgets, App Clips, and Watch Apps and Complications. If you've got a data-heavy app, making that data indexable via Spotlight opens up new avenues for users to get to the information they're after without initially opening up your app.
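As a taste of how little code that last one takes, here's a minimal Core Spotlight sketch; the recipe item, identifiers, and attribute values are all illustrative:

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Index one item so its content is discoverable from system search
// without the user ever opening the app first.
let attributes = CSSearchableItemAttributeSet(contentType: .text)
attributes.title = "Chocolate Chip Cookies"
attributes.contentDescription = "A recipe stored in the app."

let item = CSSearchableItem(
    uniqueIdentifier: "recipe-42",
    domainIdentifier: "recipes",
    attributeSet: attributes
)

CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error = error { print("Indexing failed: \(error)") }
}
```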
All of these features are potentially strong value adds for users, albeit for a significantly smaller subset of your wider user base. So even if your app has succeeded in providing all the business functionality you might have wanted, know that there's room in your backlog to provide an above-and-beyond experience to your users.
Get used to not knowing as much about unregistered app users.
At this point, when I see a stern-faced Craig Federighi say, "At Apple, we believe privacy is a fundamental human right," I believe it. This year, Apple continued to redouble its efforts to let customers safely and privately use their devices online without unknowingly exposing that activity to those keen to gobble up every bit of information about them to resell to advertisers or identity thieves. New this year are ways of using the internet without exposing what sites you're visiting to anyone snooping in between, or where you came from when you get there. Trackers can scratch IP addresses off their list of ways to somewhat reliably correlate who you are with your past activity.
This means businesses are really going to need to think about living in a world where you truly only know as much about users as they choose to tell you. Gone may be the days of piecing together rough pictures of unregistered users from device IDs, IP addresses, cookies, etc. It may mean we'll soon live in a world where (shocker!) the only business value that brands get from their customers comes from those customers buying and using their actual product.
You may need to start rethinking the login and account management flows in your apps.
There were a couple of significant announcements at WWDC '21 that could have significant short- and long-term effects on how app developers think about managing users' login and account management flows. The first was a newly modified App Review guideline stating that apps which support account creation must also allow users to initiate deletion of their accounts from within the app.
It's a great and welcome concept. However, I can't think of a single app I use today that gives any prominence to a user interface for deleting my account or profile for its service. It will be very interesting to see the extent to which this new guideline is enforced, as it could mean a lot of real work for the back-end systems that support our apps.
The other involves Apple's continued push to drive us toward a password-less world. Sign in with Apple has taken us partway down this idealistic road. However, there is an additional authentication standard that Apple is throwing its weight behind. It's based on WebAuthn, and it essentially lets users authenticate with their Apple device using public/private key cryptography instead of the standard username and password: the device holds the private key and proves possession of it, so websites and back-ends store only users' public keys, and no passwords ever land in their databases. No passwords for users to remember, no passwords for companies to store. On its face, it sounds like a potentially huge win for security. There are still unanswered questions about trickier flows, such as how one authenticates to a service without an Apple device at hand, but it will be interesting to see how this pans out.
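Here's a rough sketch of what that flow looks like with the AuthenticationServices APIs Apple previewed this year as "passkeys in iCloud Keychain." The relying-party identifier, the server-supplied challenge, and the delegate wiring are all assumptions standing in for your own back-end and UI code:

```swift
import AuthenticationServices

// A WebAuthn-style sign-in sketch. "example.com" and `challenge` are
// placeholders for values your own server would provide.
func signIn(challenge: Data, delegate: ASAuthorizationControllerDelegate) {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com"
    )
    // Ask the device to prove possession of the private key it holds
    // for this service; the private key never leaves the device.
    let request = provider.createCredentialAssertionRequest(challenge: challenge)

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.performRequests()
}
```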
Regardless, if you're not already thinking about allowing users to authenticate by some other means than username and password, you should be. Things could be getting much better for your business and for your users. Don't disappoint.
Apple TV and tvOS continue to be products in the Apple lineup.
If there's anything I've learned over the crazy last year and a half, it's that there is no shortage of streaming content out there to entertain us. I'm a self-diagnosed Apple addict, so naturally, my favorite way to consume said media is on my 100" projection screen, driven by an Apple TV 4K. In some ways, the tvOS interface is leaps and bounds better aesthetically than what you get from a standard smart TV interface. At the same time, some of the apps created for tvOS can be absolutely maddening to use. And some of that inconsistency has to do with the scattered and incomplete development story Apple has given to developers. Scrolling list performance can be very nice if you build your interface in just the right way, but it can also be a dumpster fire, particularly when using SwiftUI to create tvOS interfaces. There also appears to be some newly discovered, horrible approach some tvOS developers have found that renders even basic up/down/left/right directional gestures only about 30 percent reliable. Those apps are downright unusable. </rant>
Given the somewhat messy state of tvOS for developers, it was pretty surprising to see almost zero attention given to the platform at this year's WWDC. I think a lot of tvOS developers were anticipating that this could be the year the APIs were cleaned up and made reliable. Shiny updated Apple TV 4K hardware was released a couple of months ago, which gave some hope that Apple still had some love to give the ecosystem and that new attention to the software would follow the new hardware. 'Twas not to be.
What does that all mean? Maybe Apple thinks tvOS is already perfect. Maybe the install base of Apple TV is too small to realistically get any more attention and innovation than it does. Or maybe developers will simply start giving tvOS apps only as much attention as Apple gives to getting the platform back on track, and it will be interesting to see what happens then.
The line between "an iOS app" and "a Mac app" is getting narrower and narrower.
In the last couple of years, Apple has introduced Mac Catalyst, a framework that lets developers leverage their existing iPad codebases as a huge first step toward also creating a native Mac app. It has shipped M1 Apple Silicon Macs, able to run almost any iPhone or iPad app "natively" on the Mac, downloaded from a version of the iOS App Store available right on the Mac. And most recently, it released the new iPad Pro, powered by the same M1 chip as the new Macs, blurring the line between laptop portability and tablet portability even more.
Thus far, Apple's Human Interface Guidelines have steered developers toward the best ways of providing enjoyable interactions with apps in very distinct usage modalities, from the micro (Apple Watch) to the everyday (iPhone) to rich content consumption and light workflow (iPad) all the way up to the line of Mac laptops and desktop computers. This year, Apple continued promoting Mac Catalyst to developers, providing more tips and techniques to tweak iPad apps to offer more information density and more precise mouse/trackpad-based interactions.
It certainly feels like there is room for another shoe to drop, given that Apple is now driving both iPads and Macs with the same M1 hardware, and that iPads and Macs can both run slight variants of the same apps via Catalyst. Would it be surprising if the iPhone 13, presumably arriving this fall, ran on the M1 as well? Or maybe just the Pro models? If all of these devices were essentially equally capable of running the most intensive workloads and apps from a hardware perspective, and developers had the tools to tweak the interaction techniques optimized for each respective platform, could a unified development model converge development for all platforms in the near future? Stay tuned.
What do Spatial Audio, dynamic visual Maps improvements and Object Capture all have in common?
Things are becoming very 3D in the Apple world. There's an awful lot of speculation these days that Apple is secretly working on Augmented Reality glasses. Who knows whether that's true.
Okay, there certainly are people at Apple who know if that's true. What all of us can see as true is that Apple very much wants to focus developers' collective attention on envisioning a world where their devices facilitate user interactions that go beyond punching a flat glass screen with our fingers. We can already use our iPads and iPhones to look "through" our displays and see a pseudo-reality where things appear that aren't really there. Newly announced Object Capture photogrammetry techniques will make 3D models of real-world objects far more accessible to integrate into our apps. AirPods Pro and Max provide more opportunities for us to delight in our brains being tricked into hearing things behind and all around us that aren't really there. And coming enhancements to Apple Maps will do even more to get us used to interacting with spatial information, like driving and walking directions, in dramatically more immersive ways.
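Object Capture is a good example of how approachable these building blocks are becoming. Here's a minimal sketch using RealityKit's PhotogrammetrySession on macOS 12; the input and output paths are placeholders:

```swift
import Foundation
import RealityKit

// Feed PhotogrammetrySession a folder of photos of a real-world object
// and ask it to produce a USDZ model.
func captureModel() async throws {
    let photosFolder = URL(fileURLWithPath: "/tmp/captured-photos", isDirectory: true)
    let modelURL = URL(fileURLWithPath: "/tmp/model.usdz")

    let session = try PhotogrammetrySession(input: photosFolder)

    // Listen for results first; they arrive as an AsyncSequence, tying
    // right back into this year's concurrency story.
    let outputs = Task {
        for try await output in session.outputs {
            if case .processingComplete = output {
                print("Model written to \(modelURL.path)")
                break
            }
        }
    }

    // Kick off reconstruction of a medium-detail model.
    try session.process(requests: [.modelFile(url: modelURL, detail: .medium)])
    try await outputs.value
}
```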
Are these all building blocks that will lead to Apple's next breakthrough glasses product? Or is that speculation just another way that Apple is making us see something that isn't really there?