Thunderbolting Your Video Card

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

 

I’m here to report that the future is now.

Yes, that’s right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won’t argue with you. It’s crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, has one quarter the pixels of 4k, and is therefore one quarter the rendering work. By today’s GPU standards, HD is pretty much easy mode. It’s not even interesting. No offense to console fans, or anything.

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It’s so good that I’m now a little angry that every display that my eyes touch isn’t OLED already. I even got into nerd fights over it, and to be honest, I’d still throw down for OLED. It is legitimately that good. Come at me, bro.

Don’t believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

There’s a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it “the best looking TV I’ve ever reviewed.” But we aren’t alone in loving the E6. Vincent Teoh at HDTVtest writes, “We’re not even going to qualify the following endorsement: if you can afford it, this is the TV to buy.” Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it’s a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That’s where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The Thunderbolt 3 cable bundled with the Razer Core is rather … diminutive. There’s a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let’s use our rule of thumb based on ultra common gigabit ethernet – 1 gigabit per second ≈ 120 megabytes per second of real-world throughput – and we arrive at about 4.8 gigabytes per second. Zow.
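For what it’s worth, here is that back-of-the-envelope conversion written out as a tiny F# sketch; the 120 MB/s per gigabit figure is just the rule of thumb above, not a spec number:

// Rough conversion of Thunderbolt 3 bandwidth using the gigabit ethernet
// rule of thumb: 1 gigabit/s of line rate ≈ 120 megabytes/s of throughput.
let megabytesPerGigabit = 120.0
let thunderbolt3Gigabits = 40.0
let gigabytesPerSecond = thunderbolt3Gigabits * megabytesPerGigabit / 1000.0
printfn "Thunderbolt 3 ≈ %.1f GB/s" gigabytesPerSecond   // prints ≈ 4.8 GB/s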

That’s more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There’s a mild performance hit for running the card externally, on the order of 15%. There’s also a further performance hit of 10% if you are in “loopback” mode on a laptop where you don’t have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.

This may look like a gamer-only thing, but surprisingly, it isn’t. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you’re actually interested in this stuff; it is essential. I’ll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won’t fit. If the card takes more than 2 slots in width, it also won’t fit, but this is more rare. Depth (length) is rarely an issue.
  • There are four fans in the Razer Core and although it is reasonably quiet, it’s not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device internally; it’s really just a power supply, some Thunderbolt 3 bridge logic, and a PCI express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer, larger fans that run quieter.
  • If you’re putting a heavy hitter GPU in the Razer Core, I’d try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you’ll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it’s connected to.
  • There is no visible external power switch on the Razer Core. It doesn’t power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics cards active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.
  • It’s kinda … weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can’t really turn off the built in GPU – you can select “only use display 2”, that’s all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both “displays” connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical “because I could”, I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here’s why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It’s also the item you are most likely to need to replace a year or two from now.
  • The CPU and memory speeds available today are so comically fast that even a low-end $120 i3-7100 will make zero difference in real world gaming at 1080p or higher … if you’re OK with a 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you a 60fps minimum everywhere.
  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.
  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I’ve built in the last ten years.
  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has “only” PCI Express x4 bandwidth, many benchmarkers have noted that moving a GPU from PCI Express x16 to x8 has almost no effect on performance. And there’s always Thunderbolt 4 on the horizon.

The future, as they say, is already here – it’s just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what’s possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it’s not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let’s compare my Skull Canyon NUC, which has Intel’s fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080, high detail (NUC embedded GPU → GTX 1050 Ti)

  • Bioshock Infinite: 15 fps → 79 fps
  • Rise of the Tomb Raider: 12 fps → 49 fps
  • Overwatch: 43 fps → 114 fps

As predicted, that’s a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple were to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It’s a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They’ll only get cheaper over time.

The Gamma dataviz package now available!

There were a lot of rumors recently about the death of facts and even the death of statistics. I believe the core of the problem is that working with facts is quite tedious and the results are often not particularly exciting. Social media made it extremely easy to share your own opinions in an engaging way, but what we are missing is a similarly easy and engaging way to share facts backed by data.



This is, in essence, the motivation for The Gamma project that I’ve been working on recently. After several experiments, including the visualization of Olympic medalists, I’m now happy to share the first reusable component based on the work that you can try and use in your data visualization projects. If you want to get started:

The package implements a simple scripting language that anyone can use for writing simple data
aggregation and data exploration scripts. The tooling for the scripting language makes it super
easy to create and modify existing data analyses. Editor auto-complete offers all available
operations and a spreadsheet-inspired editor lets you create scripts without writing code – yet,
you still get a transparent and reproducible script as the result.

The Raspberry Pi Has Revolutionized Emulation

Every geek goes through a phase where they discover emulation. It’s practically a rite of passage.

I think I spent most of my childhood – and a large part of my life as a young adult – desperately wishing I was in a video game arcade. When I finally obtained my driver’s license, my first thought wasn’t about the girls I would take on dates, or the road trips I’d take with my friends. Sadly, no. I was thrilled that I could drive myself to the arcade any time I wanted.

My two arcade emulator builds in 2005 satisfied my itch thoroughly. I recently took my son Henry to the California Extreme expo, which features almost every significant pinball and arcade game ever made, live and in person and real. He enjoyed it so much that I found myself again yearning to share that part of our history with my kids – in a suitably emulated, arcade form factor.

Down, down the rabbit hole I went again:

 

I discovered that emulation builds are so much cheaper and easier now than they were when I last attempted this a decade ago. Here’s why:

  1. The ascendance of Raspberry Pi has single-handedly revolutionized the emulation scene. The Pi is now on version 3, which adds critical WiFi and Bluetooth functionality on top of additional speed. It’s fast enough to emulate N64 and PSX and Dreamcast reasonably, all for a whopping $35. Just download the RetroPie bootable OS onto a $10 32GB SD card, slot it into your Pi, and … well, basically you’re done. The distribution comes with some free games on it. Add additional ROMs and game images to taste.
  2. Chinese all-in-one JAMMA cards are available everywhere for about $90. Pandora’s Box is one “brand”. These things are an entire 60-in-1 to 600-in-1 arcade on a board, with an ARM CPU and built-in ROMs and everything … probably completely illegal and unlicensed, of course. You could buy some old broken down husk of an arcade game cabinet, anything at all as long as it’s a JAMMA compatible arcade game – a standard introduced in 1985 – with working monitor and controls. Plug this replacement JAMMA box in, and bam: you now have your own virtual arcade. Or you could build or buy a new JAMMA compatible cabinet; there are hundreds out there to choose from.
  3. Cheap, quality arcade size IPS LCDs of 18-23″. The CRTs I used in 2005 may have been truer to old arcade games, but they were a giant pain to work with. They’re enormous, heavy, and require a lot of power. Viewing angle and speed of refresh are rather critical for arcade machines, and both are largely solved problems for LCDs at this point, which are light, easy to work with, and sip power for $100 or less.

Add all that up – it’s not like the price of MDF or arcade buttons and joysticks has changed substantially in the last decade – and what we have today is a console and arcade emulation wonderland! If you’d like to go down this rabbit hole with me, bear in mind that I’ve just started, but I do have some specific recommendations.

Get a Raspberry Pi starter kit. I recommend this particular starter kit, which includes the essentials: a clear case, heatsinks – you definitely want small heatsinks on your 3, as it dissipates almost 4 watts under full load – and a suitable power adapter. That’s $50.

Get a quality SD card. The primary “drive” on your Pi will be the SD card, so make it a quality one. Based on these excellent benchmarks, I recommend the Sandisk Extreme 32GB or Samsung Evo+ 32GB models for the best price to performance ratio. That’ll be $15, tops.

Download and install the bootable RetroPie image on your SD card. It’s amazing how far this project has come since 2013; it is now about as close to plug and play as it gets for free, open source software. The install is, dare I say … “easy”?

Decide how much you want to build. At this point you have a fully functioning emulation brain for well under $100 which is capable of playing literally every significant console and arcade game created prior to 1997. Your 1985 self is probably drunk with power. It is kinda awesome. Stop doing the Safety Dance for a moment and ask yourself these questions:

  • What controls do you plan to plug in via the USB ports? This will depend heavily on which games you want to play. Beyond the absolute basics of joystick and two buttons, there are Nintendo 64 games (think analog stick(s) required), driving games, spinner and trackball games, multiplayer games, yoke control games (think Star Wars), virtual gun games, and so on.
  • What display do you plan to plug in via the HDMI port? You could go with a tiny screen and build a handheld emulator; the Pi is certainly small enough. Or you could have no display at all, and jack in via HDMI to any nearby display for whatever gaming jamboree might befall you and your friends. I will say that, for whatever size you build, more display is better. Absolutely go as big as you can in the allowed form factor, though the Pi won’t effectively use much more than a 1080p display maximum.
  • How much space do you want to dedicate to the box? Will it be portable? You could go anywhere from ultra-minimalist – a control box you can plug into any HDMI screen with a wireless controller – to a giant 40″ widescreen stand up arcade machine with room for four players.
  • What’s your budget? We’ve only spent under $100 at this point, and great screens and new controllers aren’t a whole lot more, but sometimes you want to build from spare parts you have lying around, if you can.
  • Do you have the time and inclination to build this from parts? Or do you prefer to buy it pre-built?

These are all your calls to make. You can get some ideas from the pictures I posted at the top of this blog post, or search the web for “Raspberry Pi Arcade” for lots of other ideas.

As a reasonable all-purpose starting point, I recommend the Build-Your-Own-Arcade kits from Retro Built Games. They range from $330 for the full kit down to $90 for just the wood case.

You could also buy the arcade controls alone for $75, and build out (or buy) a case to put them in.

My “mainstream” recommendation is a bartop arcade. It uses a common LCD panel size in the typical horizontal orientation, it’s reasonably space efficient and somewhat portable, and it’s still comfortably large enough for a big screen, big speakers gameplay experience, with support for two players if that’s what you want. That’ll be about $100 to $300 depending on options.

I remember spending well over $1,500 to build my old arcade cabinets. I’m excited that it’s no longer necessary to invest that much time, effort or money to successfully revisit our arcade past.

Thanks largely to the Raspberry Pi 3 and the RetroPie project, this is now a simple Maker project you can (and should!) take on in a weekend with a friend or family. For a budget of $100 to $300 – maybe $500 if you want to get extra fancy – you can have a pretty great classic arcade and classic console emulation experience. That’s way better than I was doing in 2005, even adjusting for inflation.

Upcoming F# events – learn Suave, FsLab & more!

Some people in the F# community have a reputation for traveling too much. I do not know how that is possible, but as it happens, I will be visiting a couple of places in June and doing a number of talks, workshops and courses. So, if you are thinking about getting into F#, web development with F# using the amazing Suave library, playing with the new trendy F# to JavaScript compiler called Fable, or learning about the recent features in FsLab and Ionide, then continue reading!

The map includes all my travels, but not all of the pins are for F# events. I’m visiting Prague just to see my family (even though there is a new awesome F# meetup there) and my stop in Paris is for attending the Symposium for the History and Philosophy of Programming (although we might still do something with the local F# group too).

Create forms with Websharper.Forms

In my previous posts, I have covered multiple aspects of how WebSharper can be used to make nice webapps: using animations, tapping into external JS libraries, or using the built-in router in UI.Next to make SPAs. Today I would like to cover another aspect which is essential for making useful webapps – Forms.

Most of the sites we visit on a daily basis have forms. WebSharper.Forms is a library fully integrated with the reactive model of UI.Next; it takes form composition to the next level by making composing forms, handling validation, handling submission and displaying errors an easy task.

WebSharper.Forms is available in alpha at the moment on nuget – https://www.nuget.org/packages/WebSharper.Forms.

What is needed to build a form

The form that we will build in this tutorial will handle:

  • Inline validation
  • Submitting data
  • Async operation
  • Error handling from async operation


These are the requirements I gathered during my last project. I had to deal with many forms but overall, they all required these four points and nothing more.

Composing with WebSharper.Forms

All the forms that I have built so far follow the same order of instructions:

  1. Calls Form.Return,
  2. Follows with a bunch of applies (<*>) of Form.Yield,
  3. Pipes some async function via Form.MapAsync which is to be executed on submit,
  4. Pipes Form.MapToResult to handle the result of the async call (this step and the previous one can be combined with MapToAsyncResult),
  5. Pipes (|>) Form.WithSubmit to tell it that I want to submit something after a submit button click,
  6. Pipes Form.Render which provides a way to transform the Form into a Doc which we can then embed in the page.

As an example, here is the full implementation of the form that we will use:

Form.Return (fun firstname lastname age -> firstname + " " + lastname, age)
<*> (Form.Yield "" |> Validation.IsNotEmpty "First name is required.")
<*> (Form.Yield "" |> Validation.IsNotEmpty "Last name is required.")
<*> (Form.Yield 18)
|> Form.MapAsync(fun (displayName, number) -> sendToBackend displayName number)
|> Form.MapToResult (fun res ->
    match res with
    | Success s -> Success s
    | Result.Failure _ -> Result.Failure [ ErrorMessage.Create(customErrorId, "Backend failure") ])
|> Form.WithSubmit
|> Form.Render(fun name lastname age submit ->
    form [ fieldset [ div [ Doc.Input [] name ]
                      Doc.ShowErrorInline submit.View name
                      div [ Doc.Input [] lastname ]
                      Doc.ShowErrorInline submit.View lastname
                      div [ Doc.IntInputUnchecked [] age ]
                      Doc.Button "Send" [ attr.``type`` "submit" ] submit.Trigger
                      Doc.ShowCustomErrors submit.View ] ])

The first part of the form, composed by the chain Return <*> Yield <*> Yield <*> Yield, is very powerful.

If you want to read more about this type of composition, you can read this blog post from Tomas Petricek http://tomasp.net/blog/applicative-functors.aspx/.

Basically, it allows us to work directly with the validated input data in the function given to Form.Return. Every interaction is done by composing Form<_> elements. Since validation on the inputs is done at the Form.Yield level, the values given to the function in Form.Return are always valid and we can safely work with them.

If our input is a string input, Form.Yield "" will return a Form<string,_>, and within the function in Form.Return we can directly work with the string given by the Form.Yield.
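To make that concrete, here is a minimal single-field sketch using only the operations shown in this post; the greeting logic itself is made up for illustration:

open WebSharper.Forms

// Minimal sketch: one validated string field. Form.Yield "" is a Form<string, _>;
// the function given to Form.Return receives the already-validated string,
// so it can be used directly.
let greetingForm =
    Form.Return (fun name -> "Hello, " + name)
    <*> (Form.Yield "" |> Validation.IsNotEmpty "Name is required.")
// greetingForm : Form<string, ((Var<string> -> 'a) -> 'a)>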

Now it is interesting to look at the types to see how the composition works. The first Form.Return has the following type:

Form<'T, 'D -> 'D>
Form<(string -> string -> int -> string * int), ('a -> 'a)>

And Form.Yield "" has the following type:

Form<string, ((Var<string> -> 'a) -> 'a)>

Applying Yield to Return (putting them together with <*>) combines the types and returns:

Form<(string -> int -> string * int), ((Var<string> -> 'a) -> 'a)>

We basically removed one of the string params from 'T and added a Var<string> param in 'D. By continuing in the same way with Form.Yield "" and Form.Yield 18, we end up with:

Form<(string * int), ((Var<string> -> Var<string> -> Var<int> -> 'a) -> 'a)>

And it turns out that string * int is our inputs combined in a tuple, which we receive as an argument in Form.MapAsync, and Var<string> -> Var<string> -> Var<int> is what we receive in Form.Render to render our form. Wonderful, it all adds up!

Inline validation

Inline validation refers to validating the fields before the form is sent. It helps prevent submitting the form for nothing. I might be stating the obvious, but the server should still perform its own validation on the input it receives.

Validation is handled by piping Form.Yield into one of the Validation.XX functions.

<*> (Form.Yield "" |> Validation.IsNotEmpty "Last name is required.")
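Validators can also be chained, each contributing its own error message when its check fails. A small sketch – Validation.IsNotEmpty appears in this post, while Validation.IsMatch is assumed to be available in the Validation module:

// Sketch: chaining validators on a single field. Each validator adds its own
// error message when the value is invalid.
Form.Yield ""
|> Validation.IsNotEmpty "Email is required."
|> Validation.IsMatch ".+@.+" "This does not look like an email address."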

What happens when data is invalid?

That’s the amazing part: when data is invalid, the function in Form.Return isn’t executed. Instead, a Failure is passed through and can be caught in a Form.MapResult, or directly in Form.Render, to display the error. That is why we can safely assume that all the arguments given to the Form.Return function are valid and we can perform the action we want.

Mapping async function and result

Most of the time when sending a form we want to perform a network request. Those requests are usually async requests. Form.MapAsync allows us to specify an async function to be executed when the form is submitted. This allows us to handle the result in Form.MapToResult without worrying about the async nature of the call. Form.MapToResult is piped in to perform an action when the result of the async function is returned.

|> Form.MapAsync(fun (displayName, number) -> sendToBackend displayName number)
|> Form.MapToResult (fun res ->
    match res with
    | Success s -> Success s
    | Result.Failure _ -> Result.Failure [ ErrorMessage.Create(customErrorId, "Backend failure") ])
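The sendToBackend function itself isn’t shown in this post. Here is a minimal sketch of what it could look like, assuming it returns a WebSharper.Forms Result wrapped in an Async so that the Success/Failure match above lines up; the body is purely illustrative:

// Hypothetical backend call: returns Async<Result<string * int>> so the
// form pipeline above can pattern match on Success / Failure.
let sendToBackend (displayName: string) (age: int) : Async<Result<string * int>> =
    async {
        // Simulate a server round trip.
        do! Async.Sleep 500
        if age >= 18 then
            return Success (displayName, age)
        else
            return Result.Failure [ ErrorMessage.Create(customErrorId, "You must be 18 or older.") ]
    }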

Submitting data

When we want to use a submit button, and we want the form to be triggered when that submit button is clicked, we need to pipe in Form.WithSubmit. This adds a special type at the end of the arguments of 'D. The type becomes:

Form<(string * int), ((Var<string> -> Var<string> -> Var<int> -> Submitter<Result<string * int>> -> 'a) -> 'a)>

The Submitter type exposes a Trigger function, which allows the form to be triggered, and a View, which observes the Result<'T> of the form. A Submitter is just a type hiding a View.SnapshotOn, where Trigger takes a snapshot of the current value of the form. If you are interested, you can find its definition here.

The View can be used to display inline errors and errors returned from the async call.

I pipe the submit after the Form.Map; otherwise you need to use Form.TransmitView to observe errors which occur during the mapping. Also, if you pipe the submit after the Form.Map, be sure to add at least one validation, otherwise the Form.Map will be executed once on startup.

|> Form.WithSubmit

Render

Finally we render the form and transform it into a Doc. As we saw earlier, the arguments of the Form.Render function are the Var<_>(s) plus a Submitter. We basically construct the form and call .Trigger on click.

|> Form.Render(fun name lastname age submit ->
    form [ fieldset [ div [ Doc.Input [] name ]
                      Doc.ShowErrorInline submit.View name
                      div [ Doc.Input [] lastname ]
                      Doc.ShowErrorInline submit.View lastname
                      div [ Doc.IntInputUnchecked [] age ]
                      Doc.Button "Send" [ attr.``type`` "submit" ] submit.Trigger
                      Doc.ShowCustomErrors submit.View ] ])

Some helpers

The Render call contains some extra functions, Doc.ShowErrorInline and Doc.ShowCustomErrors. These functions are extensions that I have created to simplify the display of errors. Here’s the implementation:

let customErrorId = 5000

type Doc with
    static member ShowErrorInline view (rv: Var<_>) =
        View.Through(view, rv)
        |> View.Map (function
            | Success _ -> Doc.Empty
            | Failure errs ->
                errs
                |> List.map (fun err -> p [ text err.Text ] :> Doc)
                |> Doc.Concat)
        |> Doc.EmbedView

    static member ShowCustomErrors view =
        Doc.ShowErrors view (fun errs ->
            errs
            |> List.filter (fun err -> err.Id = customErrorId)
            |> List.map (fun err -> p [ text err.Text ] :> Doc)
            |> Doc.Concat)

View.Through will keep only the errors related to the given Var<_>. I am using customErrorId to filter the errors that I created myself.

The full source code can be found here.

Conclusion

At first WebSharper.Forms looks intimidating, especially when you are not familiar with the applicative notation. But the concepts used in WebSharper.Forms are very powerful, as they allow us to hide behind the Form<_> type and manipulate safe values to perform our actions. The only validation needed is the validation during the Yield stage. After getting used to it, I found WebSharper.Forms very beneficial as it allowed me to rapidly build form flows, and even after a few weeks, I can just have a glance at the code and directly understand what it is doing (and we all know that that does not happen with every piece of code). As always, if you have any comments, don’t hesitate to hit me on Twitter @Kimserey_Lam or leave a comment below. Thanks for reading!