Emily Waite, writing for Wired.com:

> The difference between getting news from an RSS reader and getting it from Facebook or Twitter or Nuzzel or Apple News is a bit like the difference between a Vegas buffet and an a la carte menu. In either case, you decide what you actually want to consume. But the buffet gives you a whole world of options you otherwise might never have seen.
That’s an excellent analogy.
RSS readers obviously have their own shortcomings as well. The firehose approach can easily overwhelm, especially when multiple outlets all publish the same news at the same time.
So introducing filters in Yeti was definitely a good idea.
And some more…
> The readers all have settings to help cope with these issues to varying degrees, where possible; it’s just a matter of how many hours you want to spend shaping your RSS bonsai.
And thus Yeti’s approach: start you off with sensible basics, then hand you the power tools.
I believe I’m doing the right thing here. Given how much attention this post has garnered over the last couple of hours, the timing feels just right for Yeti. I guess I’ll just have to shift into sixth gear and ramp up production.
Another Friday, another alpha release. This is a housekeeping build required for the upcoming single feeds: Unread and Bookmarks.
- Image settings now take effect. If the source does not provide alternate image sizes, the default URL is used (which could point to a large image).
- Removing feeds is now supported.
- Searching for an article has been optimised to run smoothly on older devices.
- When feeds are loaded on app launch, the full batch is fetched once; from then on, only new changes are loaded. This greatly improves caching and networking performance.
- The API has also been updated to not return responses if the local cache matches the server response.
- Tweet rendering (only works if Tweets were embedded and not quoted)
- Feed listings now update when you move to the next or previous article using the accessibility view. This also enables endless scroll on iPads (maybe even on iPhones, but I haven’t tested it)
- Image views and gallery views have been reimplemented to render faster and use less memory.
- Improved text rendering for the article title & author in the article view. It should now scale gracefully with dynamic type.
- The above change has also been made in the Feed view for individual cells.
- All layout rendering issues in the Article renderer have been resolved. This includes those stupid Xcode warnings and the like. Two known issues remain:
A) Quotes sometimes render with extra height.
B) Formatted code blocks don’t scroll horizontally on a single line.
- Opening links externally is now partially implemented. This may still crash in some situations.
- Sharing an article now adds a “space” between the title and URL. This will be in place until Apple resolves the bug and restores the correct behaviour.
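The cache-matching behaviour described in the notes above can be sketched roughly as follows. This is a minimal, hypothetical illustration in TypeScript: the `FeedResponse` shape, the `etag` field and the `fetchFeed` function are all invented for the example, not Yeti’s actual API.

```typescript
// Hypothetical sketch of the cache check: if the client's cached tag
// matches the server's, skip the body entirely (the HTTP 304 pattern).
// None of these names come from Yeti's real implementation.

interface FeedResponse {
  etag: string;      // hash of the server's current payload
  items?: string[];  // omitted when the client is already up to date
}

function fetchFeed(
  clientEtag: string | null,
  serverEtag: string,
  items: string[]
): FeedResponse {
  if (clientEtag === serverEtag) {
    return { etag: serverEtag }; // local cache matches; no payload sent
  }
  return { etag: serverEtag, items };
}
```

The win is mostly on the networking side: the expensive part of a feed refresh is shipping and re-parsing unchanged articles, and a cheap tag comparison avoids both.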
I hope you enjoy reading over the weekend. Have a good one. ✌🏼
If you’re familiar with the NATO phonetic alphabet, you’ve probably already guessed what this post is about. Keeping things fashionable, allow me to burst out of my shell of excitement and say it: Project “Yeti” is finally real.
Yes, it has always been real. Real for me. But now, with today’s alpha release, it’s even more real as more people are on board. So to everyone who is: you’re probably already testing it, playing around with it, tinkering with it, breaking it, and what have you. Have a blast using it, just as I have had working on it up to this point.
I did manage to sneak “one more thing” into the alpha release, which was planned for much later: “Add to Yeti” from the share extension. It isn’t the most polished implementation, but it is usable, and it’ll help you get off the ground more quickly.
If you’re looking to import your existing subscriptions into Yeti, well, sorry. That didn’t make it into this build as I wasn’t able to test it thoroughly. So if you have your OPML file ready, feel free to email it to me; it’ll help me test the system better.
I’ll spare you the technical details in this post. A year’s worth of work has finally taken the shape of an actual product. I’ll allow myself to enjoy that feeling while you enjoy the app (or utterly hate it…)
Once again, let’s begin with the abstract confusing title.
But it’s a real one. Here’s some proof.
As you can see, that’s an actual commit. Well, allow me to explain why this exists in the first place.
When you’re going through a lot of items in a feed, you may find yourself needing to move quickly between those items, or within an item itself. This is particularly useful when you’re researching something. The search feature (which I talk about here) ties into this specific bit.
So what is “previous, next, up and down”? Think about it in terms of the article. What you get is:
- Previous article
- Next article
- Beginning of article
- End of article
So now you never have to leave the reading interface and can go for a reading spree if that’s your thing.
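For the curious, the four commands above can be modelled as a tiny state transition. This is purely an illustrative sketch (the app itself is native iOS, and every name below is made up):

```typescript
// Toy model of the four reading-navigation commands. "ReaderState" and
// the command names are invented for illustration only.

type NavCommand =
  | "previousArticle"
  | "nextArticle"
  | "beginningOfArticle"
  | "endOfArticle";

interface ReaderState {
  articleIndex: number; // which article in the feed listing
  scrollOffset: number; // 0 = top of article, 1 = bottom (normalised)
}

function apply(
  cmd: NavCommand,
  state: ReaderState,
  articleCount: number
): ReaderState {
  switch (cmd) {
    case "previousArticle":
      // Clamp at the first article; jump to its top.
      return { articleIndex: Math.max(0, state.articleIndex - 1), scrollOffset: 0 };
    case "nextArticle":
      // Clamp at the last article; jump to its top.
      return { articleIndex: Math.min(articleCount - 1, state.articleIndex + 1), scrollOffset: 0 };
    case "beginningOfArticle":
      return { ...state, scrollOffset: 0 };
    case "endOfArticle":
      return { ...state, scrollOffset: 1 };
  }
}
```

Treating navigation as pure state transitions like this also makes the behaviour trivial to unit-test, regardless of the UI layer on top.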
Searching for content in most such apps is trivially done by checking whether your search input matches the title of an article, or something similar. This is great; it has worked for many years. However, that method naively ignores a lot of information already available to the app. For example:
- The author’s name (when multiple people write for the same website)
- The date of publication (matched against words like “Today”, “Yesterday” and the like)
All of the above may contain information you could be searching for. Being stuck trying to remember the name of an article you read last Sunday, just so you can find it now, is a b****. I’ve been in this position many times myself. Yes, bookmarking can save your bacon, but that method has a big single point of failure: what if you forgot to bookmark it?
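To make that concrete, here is a rough sketch of a matcher that consults the author and friendly date words alongside the title. The `Article` shape and every identifier here is an assumption for illustration, not Yeti’s actual code:

```typescript
// Illustrative search matcher: match a query against the title, the
// author's name, and words a user might type for the publication date.

interface Article {
  title: string;
  author: string;
  published: Date;
}

// Map a publication date to the friendly words a user might search for.
function dateWords(published: Date, now: Date): string[] {
  const day = 24 * 60 * 60 * 1000;
  const diff = Math.floor((now.getTime() - published.getTime()) / day);
  if (diff === 0) return ["today"];
  if (diff === 1) return ["yesterday"];
  return [];
}

function matches(article: Article, query: string, now: Date): boolean {
  const q = query.toLowerCase();
  return (
    article.title.toLowerCase().includes(q) ||
    article.author.toLowerCase().includes(q) ||
    dateWords(article.published, now).some((w) => w.includes(q))
  );
}
```

So a query like “yesterday” can surface articles by date even when nothing in the title matches.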
A well-produced app should save you from this situation. It should save me from this situation. Depending on your current device, you may or may not be able to see the tags on this post; I’ve included “Levenshtein” in there. If you’ve ever heard of Levenshtein distance, you’ll be familiar with how this works. If you haven’t, it’s simply a “score” of how similar or dissimilar two pieces of text are.
The Levenshtein distance is also calculated against the title and summary to provide a fuzzy, forgiving search experience. You only need to know the “general” direction of where you’re going, not the precise location.
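For reference, the textbook dynamic-programming form of the distance looks like this. It is a generic implementation, not Yeti’s actual code:

```typescript
// Classic Levenshtein distance: the minimum number of single-character
// insertions, deletions and substitutions to turn one string into the
// other. Two rolling rows keep memory linear in b's length.

function levenshtein(a: string, b: string): number {
  const cols = b.length + 1;
  let prev = Array.from({ length: cols }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    const curr = [i];
    for (let j = 1; j < cols; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(
        prev[j] + 1,       // deletion
        curr[j - 1] + 1,   // insertion
        prev[j - 1] + cost // substitution
      );
    }
    prev = curr;
  }
  return prev[cols - 1];
}
```

A small distance relative to the string length reads as “close enough to match”, which is exactly what a forgiving search threshold needs.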
You may think this is a lot of machinery for a simple text-based search operation. It isn’t. I wonder why more apps haven’t already done something like this.
The title is going to sound very weird, especially considering this is the first post. If you’re aware of what Yeti is, the rest of this post is going to make sense. If not, it’s either going to confuse you or you’re going to figure out what this is about.
- Oddly enough, I had never thought of async text rendering. Well, it’s implemented now, and it wasn’t too hard: I simply moved all text rendering to a few concurrent threads while ensuring all UI work still happens on the main thread. The app renders the very top section right away while the rest renders afterwards, so the user can get started with the content immediately. This isn’t a big deal on an iPhone X, which can render a decent chunk within a millisecond, but it’s very useful on something like an iPhone 5C.
- Native code rendering. I was against the idea of using a web view to display pre-formatted code. So, with the help of highlight.js, I got this working pretty well. It isn’t as fast as I’d like it to be, but that’s something I can optimise later.
- Improved margins. I was pretty a*** about the text lining up with the back button on the screen. This is fixed now and no longer drives my OCD up the wall.
- On the server side, I completed work on convertor v3. I know, the product isn’t even at v1.0.0 and I already have the convertor at v3. This is critical: convertor v1 was very basic and did not handle a lot of tags and edge cases; v2 handled all supported tags but failed on a lot of new edge cases; v3 covers both. It’s also 30-35% faster than v2, so that’s another thing in its favour.
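To give a flavour of what a convertor does (none of this is the real v3, and every name is invented), here is a toy pass that turns a few HTML tags into flat render nodes that a native renderer could draw:

```typescript
// Toy "convertor": map a handful of HTML tags to flat render nodes.
// A real converter would use a proper HTML parser and handle nesting,
// attributes and many more tags; this regex scan is illustration only.

interface RenderNode {
  kind: "paragraph" | "heading" | "code";
  text: string;
}

function convert(html: string): RenderNode[] {
  const tagToKind: Record<string, RenderNode["kind"]> = {
    p: "paragraph",
    h1: "heading",
    pre: "code",
  };
  const nodes: RenderNode[] = [];
  // \1 back-reference ensures the closing tag matches the opening one.
  const re = /<(p|h1|pre)>([\s\S]*?)<\/\1>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    nodes.push({ kind: tagToKind[m[1]], text: m[2].trim() });
  }
  return nodes;
}
```

The payoff of a structure like this is that the client renders typed nodes natively instead of interpreting raw HTML, which is where the tag and edge-case handling mentioned above ends up mattering.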