Friday, December 23, 2016

Medical Ultrasound Systems Pt II, Where I Stand by My Statements That They're Actually Inexpensive

So my recent post on ultrasound system costs got a lot of attention, more than anything I've written in some time, mostly, I think, due to a Hacker News thread dedicated to it. Some good questions popped up in the comments on my original post, and likewise in the HN thread, along with some replies that clearly carried particular assumptions about the medical ultrasound industry. I'll try to address them all in this post.

Now, for those of you who are genuinely interested in getting answers, I want to provide them as best I can, and any snark in this post is not aimed at you. For those who want to sit on the sidelines and snipe, the snark most definitely is. Please read what I write with that in mind.

First, the more technical side of things. (When I reference a conference, I mean this one: the IEEE UFFC Society International Ultrasonics Symposium. There are others, but this is a great example.)

1) "What about more electronics in the transducer, and GPUs for beamforming?"
Good question, and transducers are already headed in that direction. Since Philips introduced the first real 2D array around 2003, enabled by sub-array beamforming electronics in the transducer, there have been steady advances in that area. However, it's important to understand that developing electronics (usually an ASIC) dedicated to a single transducer application is a large investment in money, manpower, and time; then there's integrating it into the acoustic stack and making the whole thing work together. It typically takes a substantial team several years to develop such a product, and while advances in available processes make this far easier in 2016 than it was in 2003, it's still a lot of work. Each of the large companies has a small number of 2D arrays available, but they have to offer substantial benefits over their 1D counterparts because, no surprise, they are a lot more expensive.

Further, make an array 2D instead of 1D and now you've got new data processing challenges - volumes instead of planes, thousands of individual signals instead of ~200. That's that much more computation, so even though the compute available today is greater than in the past, the demands are growing too. At some point the computation available economically will exceed the need, but we're not there yet.
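To put rough numbers on that jump, here's a back-of-the-envelope sketch (the 2D element count is purely illustrative, not any specific product's):

```python
# Rough channel-count scaling from a 1D to a fully sampled 2D array.
elements_1d = 200        # typical 1D array, as mentioned above
elements_2d = 64 * 64    # an illustrative fully sampled 2D aperture

ratio = elements_2d / elements_1d
print(f"{elements_2d} signals, ~{ratio:.0f}x a 1D array")  # 4096 signals, ~20x
```

And that's before counting the extra work of reconstructing volumes rather than planes.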

But what about the regular 1D arrays? Well, yes, more electronics can be put in there, but remember there are 10 to 20 transducer types per system (cardiac, abdominal, vascular, obstetrics, etc., each with its own needs). So which makes more sense - a single system with all the common hardware for sampling and beamforming that serves multiple transducers, or all that electronics replicated in each transducer and a simpler system? Economically, right now it makes sense to have the common components in the system, but if electronics get much cheaper and better, that equation will change, and it's something being re-evaluated all the time.

If you look here, you'll see the specs for the Verasonics open platform, which is a nice hardware package for someone to learn, test, and develop ultrasound on, though commercial premium ultrasound systems are often specced somewhat higher. Sampling is up to 62.5MHz, 14 bit, 256 channels - that's up to 224 Gb/s, or around an entire single-layer Blu-ray disc per second. Thunderbolt will get you to 40 Gbps, so place that all in context and realise that as good as modern electronics are, the demands of high-end ultrasound are still beyond them. That will change in time, but again, we're not there yet.
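For those who want to check the arithmetic, it's straightforward (a sketch of the numbers quoted above):

```python
# Raw data rate of a 256-channel front end sampling at 62.5 MHz, 14 bits,
# as quoted for the Verasonics platform above.
channels = 256
sample_rate_hz = 62.5e6
bits_per_sample = 14

gbits_per_s = channels * sample_rate_hz * bits_per_sample / 1e9
gbytes_per_s = gbits_per_s / 8
print(f"{gbits_per_s:.0f} Gb/s = {gbytes_per_s:.0f} GB/s")  # 224 Gb/s = 28 GB/s
# 28 GB/s is a bit more than one single-layer Blu-ray (25 GB) every second.
```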

Also note that the voltage the system supplies is up to 190V p-p, which means you can't use the smallest process nodes and pack a lot of electronics onto each wafer; you have to stick with a process node capable of handling that voltage, so larger electronics and higher cost. That's not likely to change anytime soon - the fundamental physics limits performance per volt (again, at least until better materials come along). The last several years have seen an improvement with the advent of single crystal piezoelectrics, but right now there's nothing on the horizon promising such a leap again.

Then there's heat. Electronics generate heat, it's just in their nature. A few watts in something as small as a handheld transducer can raise its temperature very quickly, and burn either the patient or the sonographer. There are stringent FDA rules on how hot a transducer can get, and performance is always limited to make sure that never happens - the transducer basically performs worse than it could in order to be safe. If the electronics for each channel produce 50mW, then on a 200 channel probe that's 10W, which is too much - but if the electronics are 5mW each, it's 1 Watt total and now gets more interesting. If power consumption can be made that low, then it becomes a size and economics argument, not one of practicality.
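The arithmetic behind that heat budget, using the same illustrative numbers:

```python
# Total probe dissipation as a function of per-channel electronics power.
channels = 200
for per_channel_mw in (50, 5):
    total_w = channels * per_channel_mw / 1000.0
    print(f"{per_channel_mw} mW/channel -> {total_w:.0f} W total")
# 50 mW/channel is 10 W total: too hot for a handheld probe.
# 5 mW/channel is 1 W total: starting to look workable.
```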

Now, beamforming. To begin with, for the technically minded among you, here's a great presentation covering the topic in far more detail than most need. It's from 2005 so a little outdated on some specs, but the basics are still the same. For everyone else, beamforming is taking the raw data and creating an image from it. This involves taking that large flow of data (that 224 Gb/s), performing a ton of maths operations on it depending on the imaging mode, and displaying the result - basically a lot of signal processing. The presentation ends with a summary of trends, "Analog electronics into probe, digital electronics into software", and that is exactly what is happening, with GPUs now powerful enough to begin to take over from specialised hardware beamformers in some cases, a shift likely to accelerate over the next few years. It will take some time before you see it in the clinic as systems tend to last a decade or more, but it's coming.
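To make "beamforming" concrete, here's a toy delay-and-sum sketch in Python - the conceptual core only, with idealised geometry and none of the interpolation, apodisation, or dynamic focusing a real system needs (all names here are mine, not any product's):

```python
import math

def delay_and_sum(rf, element_x, focus, c=1540.0, fs=62.5e6):
    """Toy delay-and-sum beamformer for a single focal point.

    rf        : per-channel lists of raw samples
    element_x : element positions along the array, in metres
    focus     : (x, z) focal point in metres
    c         : speed of sound in tissue, m/s
    fs        : sampling rate, Hz
    """
    fx, fz = focus
    total = 0.0
    for ch, samples in enumerate(rf):
        # Distance from this element to the focal point.
        dist = math.hypot(element_x[ch] - fx, fz)
        # Convert the propagation delay into a sample index
        # (nearest neighbour here; real beamformers interpolate).
        idx = min(int(round(dist / c * fs)), len(samples) - 1)
        # Align each channel by its delay and sum coherently.
        total += samples[idx]
    return total
```

A real system repeats this, with far more care, for every pixel of every frame across hundreds of channels - which is where the compute demand comes from.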

So as far as electronics are concerned, there is progress, it is happening, but some of the intense demands of ultrasound mean that the electronics isn't quite there yet, or is only just getting there, and at the same time demands are growing as 2D devices become more prevalent. I expect in 20 years that we'll be looking at a different ecosystem for ultrasound, as cost and performance of electronics shifts workloads between system and transducer.

2) "What about micromachined devices or 3D printing of them?"
Another great question, and something that's been investigated in ultrasound over the last couple of decades. MEMS devices have been the subject of a lot of funding, by both industry and academia, for over 20 years in ultrasound. For example, in the early 1990s cMUTs (Capacitive Micromachined Ultrasonic Transducers) were hailed as the next great thing in ultrasound, and today in 2016, outside certain specific applications, we're only just starting to see the first commercial devices. That's not due to a lack of effort on the industry's part; all the major players have invested heavily in them, but it hasn't quite panned out. There have been issues, many of which have been dealt with, but overall, at this time, they simply can't outperform piezoelectrics and standard manufacturing on quality and price. There's still work to be done on them, and if they can be made a little better, a bit more consistent, and a bit lower cost, then they will grow in a number of areas, but they need to reach that level of performance that makes them viable. At that point cost can come down as demand grows, and that virtuous cycle will push more lower-cost applications out there. Check out the conference I noted above - there were multiple sessions dedicated to this topic, and it's got a lot of people working on it. Foundries and semiconductor companies would love to have another high volume application for their fabs, but the right mix of performance, cost, and demand isn't there yet.

pMUTs (piezoelectric Micromachined Ultrasonic Transducers) are also being looked at, but they have some additional difficulties on top of cMUTs'. Piezo materials tend to be lead based for good performance, and people don't tend to like lead in their semiconductor fabs - essentially it is often 'not process friendly'. The materials that are, such as ZnO and AlN, are much lower performing, so they're limited to applications like FBARs (the filters in your phones). There's some promise with scandium-doped AlN for better performance, and with fabrication methods that allow for better performing piezos - it's a field to watch, but there are still issues. Again, the conference I mention above had a special session on exactly this topic with invited speakers, and it was a big draw. Smart, experienced people in this field are interested, and it will grow.

And 3D printing? It's tough to print some of the active materials and other specialised components of a transducer, but again it's being looked at. GE, among others, is putting huge company efforts into this, and they and others have given presentations on the effort (again, at the conference mentioned above - it's almost as if smart people in the industry are thinking about this kind of stuff! :) ). So again, early days, but advanced manufacturing is coming, and it will help with performance, reliability, and prices.

3) "I can buy off the shelf parts for $x, why does the system cost more than $x?"
Quite simply because it takes a lot of effort and manpower to put together a reliable, robust, validated platform upon which people's medical decisions can be based. This would be the case with or without regulation; any product takes this amount of work. If you build something of poor quality, you get one sale and no repeat business, and word travels fast - in a competitive world like ultrasound, you lose your name quickly and you're done. Each transducer has to support multiple imaging modes - B-mode, harmonic, Doppler, etc. - and each takes time to program and validate. Then you have to support it and keep your customer happy, all while keeping your staff paid well enough not to jump ship to the latest social app, and while building the next generation of improved systems. Basically, the standard business issues and costs that face any long-term enterprise. Oh, and profit - that helps to keep companies going, products being made, and new advances worth funding.

In summary - this stuff is coming, but it's not as easy as you might think, and it's not a microphone on a smartphone that can fail or be disposed of in a couple of years.

Want to be a part of it and learn more? Please do, our industry is always looking for talented people to help make ultrasound better. Attend conferences, take it as a postgraduate course of study, join an ultrasound company or start one. Want to really get involved? Message me, I'm well connected in the industry and will put you in touch with anyone I can to help.

Now the more business side of things - and again, remember the snark is not aimed at those with genuine questions and interest:

4) "You didn't give detailed costs of all the components to prove it's priced low"
First of all, doing so would lead to an exceptionally bland article reading more like a parts list, whereas I wanted to give more of an idea of what is involved in building a system, and that it's not as simple as you might think. The original piece on Medium was based on a number of statements about the simplicity of ultrasound, and I wanted to make the point that it's a difficult, multi-disciplinary task with a lot of trade-offs. To someone versed in the field, it basically read as "I can build a soapbox derby car for $100, and if I stick a motor in it I have a car! Why do these car companies charge $50,000 for one of theirs!?!" (I exaggerate, but not by much.)

Secondly, I actually have to be careful about stating specific numbers, both on pricing and capabilities. I've done work for a number of ultrasound manufacturers, and I have to be sure I do not release any proprietary information, so I err on the side of caution and make certain that everything I talk about is already in the public domain. I'm happy when people not encumbered by such restrictions pitch in.

Lastly, the market is highly competitive, and the fact that systems aren't priced lower indicates that they are both worth paying for and priced about right. If you think the market isn't competitive, I'm not sure what I can say to convince you otherwise, but this next part will try.

5) "There's a conspiracy among manufacturers to keep prices high"
I have to say, hearing this surprised me. I've been in the industry for over 20 years and never once seen even a hint of this happening in ultrasound, with massive evidence pointing instead to intense competition. It's a multi-billion dollar market (est. ~$6 billion), with several large international players (this link here has some of the larger, this link here shows dozens of smaller ones, this market research report mentions 25 companies), and it's regulated in a way that hammers into everyone that there must be no price fixing, collusion/cartels, or other anti-competitive behaviour. Companies have moved up and down the rankings significantly over the last decades, and each is always looking (ethically and legally) for a technical or price advantage over its competitors. Medical ultrasound is also a heavily regulated market, and multiple jurisdictions (especially the EU and US) will come down hard on a company in this space participating in anti-competitive behaviour.

In every company I've worked for, I've seen strong pressure to simultaneously raise quality and reliability while lowering costs. If you look at systems on the market today compared to the past, there have been major improvements at the top end while prices have remained fairly constant, and this has had the knock-on effect of allowing the introduction of lower cost, lower capability systems further down the chain that exceed the capability of yesterday's premium systems.

If someone could start a company that produced ultrasound systems at comparable quality and consistency, in volume, but at lower cost, I guarantee you they would be bought by one of the bigger players to incorporate and take advantage of that technology. So if you feel there is a conspiracy, and that ultrasound systems are in fact easy to make, then feel free to start that company yourself and take advantage of the free money everyone else is passing up. Or, even better, I'll help you - quite seriously, email me, tell me what we're doing wrong, and I'll either find a way to hire you, get you a job in the industry, or we'll start that company and make our millions. Seriously, mail me and let's do that. Or, if you're certain there's a conspiracy, I can provide you with the contact details for the regulatory agencies in various countries who would love to see your evidence so they can prosecute with it.

To make it clear - few industries actually operate in a market that has such intense competition, among many large players, each trying to provide the customer with the best price and quality mix to make the sales, and leapfrog their competition. This is not an "Intel own 99% of the server market and have little competition to drive prices down", it's more like competition in the car industry where there are many players competing.

6) "Engineers don't know what they are doing and are passing up really obvious and simple things that will make the products much faster, better, and cheaper."
This industry is made up of thousands of dedicated engineers, researchers, and support staff who are smart, highly educated, very experienced, and highly capable. If they wanted to, many could move to doing things like apps, social media, or whatever the fad of the moment is, and make more money with less stress. But they don't, because they love what they do, they live to make technology better, faster, or cheaper, and because they know the work they do in ultrasound imaging helps people and makes a difference. That they'd willingly pass up technology advances and better methods just goes against their character, and given the attitude of these people and the competition in the industry, if management decided not to pursue such benefits, they'd leave for another company or start their own.

There are multiple professional organisations dedicated solely to ultrasound, and heavily to the medical side of it. IEEE UFFC is one such organisation - I'm heavily involved in it - and there are several others. The IEEE is a non-profit, concentrating solely on technology, and does not support any single company or commercial interest. It produces peer-reviewed journals on the state of the art in transducers, materials, electronics, systems, and imaging, and every year holds a conference where a couple of thousand people attend to present, discuss, and learn about the best practices and technologies. This year I watched presentations on 3D printing of transducers, new materials, rapid imaging techniques made possible by GPUs, micromachined devices, and advanced electronics for transducers (this page has a list of talks and the abstracts if you want to see what was covered). These are things that companies are spending plenty of resources researching, that universities have students doing PhDs on, and that over time will make their way into products as the technology matures and becomes reliable and cost effective.

If you feel that, without experience in this industry, you are already superior to those who have worked in it for years, then send me your resume. I know companies that will hire someone so skilled to give them an advantage over the competition, or will hire you as a consultant. Or I'll help you get an abstract accepted to the IEEE UFFC conference so you can get your knowledge out there. I'll work with you to get a grant from the NIH or NSF to develop your technology and patent it, or just put it out there online for the world to see - do it for the benefit of the world. Or admit it's armchair quarterbacking. Plenty of options.

Summary - The field is made up of smart, dedicated, and committed people who strive to make quality, well priced products at a variety of price points that make technical and economic sense. Please don't make statements that are predicated on them being stupid, ignorant, or greedy without some evidence to back it up.

7) "All the costs are regulatory, without the FDA we'd have safe machines at a fraction of the cost!"
This is going to be tough to disprove without giving internal costs from various companies, which I can't and won't do. Regulatory work is certainly an aspect of it, but going by headcount it's not among the top few costs. There are engineering tests and documentation burdens, but they're really not far beyond what any engineering team concerned with good record keeping and producing a safe device would do anyway. And importantly, having clear regulations allows all participants to compete on a level playing field, knowing that everyone is playing by the same rules.

Yes, you can buy a veterinary, unregulated ultrasound machine from Alibaba. Good luck getting a quality (or even useful) image out of it, not being injured by it, having it stay reliable, or getting any support for it. Or getting it to do a fraction of the things a premium ultrasound system will.

Once again, there's a reason that ultrasound is the most widely used medical imaging modality, and is incredibly safe, and part of that is the FDA and similar regulation.

8) "Are you stupid you can't make a transducer without sharp edges!"
This is in reply to one specific comment. Ultrasound is unlike MR and CT in that it is both operator and patient dependent - each image can be different, and some skill is required. Gaining clinically useful images through the available acoustic window sometimes involves placing the transducer in a location, and applying sufficient force, that it can be uncomfortable for the patient. If the patient starts to move because it's uncomfortable, getting a good image becomes harder. Some transducers trade minimised size, for access to certain locations, against maximised acoustic area for a good image, which can lead to corners that are not very smoothly rounded - and while not 'sharp' as in 'cuts the patient', they might cause more discomfort than necessary if not designed correctly. Oh, and yes, building a 200 wire cable with minimal crosstalk is easy; doing it while making it flexible enough for a sonographer to use (like I said in the original post), and at a reasonable cost, isn't. Congratulations to the person who asked that question and demonstrated their genius at how dumb we ultrasound people are - you actually managed to annoy me with those comments! You, in particular, are an author I aimed the "Since you're so smart, why don't you clean up in our industry of charlatans and idiots?" snark at.

9) "Phones are cheap and have a ton of technology in them, why aren't you that cheap?"
Several hundred million phones are sold every year, probably 4 orders of magnitude more than ultrasound systems. On average they last about 18 months to 2 years, compared to a decade or more for an ultrasound system. No-one's life depends on them. One of the simplest things here is that there just isn't the economy of scale for ultrasound to hit those price points. Perhaps it's a chicken-and-egg thing, and the "killer app" for ultrasound isn't here because it's too expensive, but if the demand is there then the tech will come. Got that application? As some have noted, there are rumours that's what Butterfly Network are working on, but despite being well connected in the industry, I've heard nothing on what they are really doing after several years of effort. I hope they produce something astounding, but until then, no smartphone-market economies of scale for ultrasound.

OK, I'm done for now and leave you with this, once again - If you can make a difference in this industry in price, performance, reliability, or application then get in touch, there are companies and universities that want good people to work on this. Don't armchair quarterback or cite imaginary conspiracies, get involved. 

Merry Christmas everyone.


  1. >Sampling is up to 62.5MHz, 14 bit, 256 channels - that's up to 224 Gb/s, or around 6 entire Blu-ray DVDs per second.

    224 Gb/s, but only 28 GB/s - a bit more than one single-layer Blu-ray per second, or a bit over half a dual-layer one. Too high to transmit over an ethernet cable even over short distances, too high to write to an SSD, and pushing the limits of parallel DDR3, but not really challenging per se. Certainly not beyond the capabilities of electronics.

    It's also DEFINITELY waaaaaaay within the ability to do computations on cheaply. A floating point operation is on 32 bits, but say you just do it on the 14 bits, so there are 16 billion floating points per second of data. A GTX 760 costs a couple hundred dollars, and can do 423 operations PER FLOATING POINT per second. That's certainly enough to do whatever beamforming or analysis is needed.

    1. Retraction: I misread the TFLOPs on the 760, it can only do 141 operations per floating point per second. I also don't mean to seem obnoxious by using a single graphics card like some kind of magic bullet that can render the computation challenges impotent. It's certainly more complicated than that. However if 16 billion floating points is at the high end of what an ultrasound system produces, and that data flow is far below what a consumer product can handle, then the other details stop really mattering. Even if it costs five grand (30x the cost of the processor itself) to set up that card in a system that can handle all that data, that cost is tiny compared to a $150k pricetag. It just isn't that complex.

      Even embedded processors can reach hundreds of GFLOPs:

      If 10% of the cost is parts and 90% is engineering, the cost should still only be in the thousands of dollars, even for nice systems. Low-end systems should be waaaaay below 30k.

      >To someone versed in the field, it basically read as "I can build a soapbox derby car for $100, if I stick a motor in it I have a car! Why do these car companies charge $50,000 for one of their cars!?!" (I exaggerate, but not by much.)

      But that's accurate - a brand-new current-year-model Fit costs 15 grand. Even if you made it out of welded-together steel, a lawnmower engine, and trash, you're still gonna be within an order of magnitude of that. The same is very much not true of ultrasound machines.

      >This is not an "Intel own 99% of the server market and have little competition to drive prices down", it's more like competition in the car industry where there are many players competing.

      And before Tesla, everybody knew electric cars were far too expensive and could not be built economically. It's the same situation- nobody builds a cheap ultrasound because nobody has an incentive to try. That does not mean it isn't possible.

      I also think it's definitely NOT regulations that make ultrasound expensive. If it were, companies would be able to sell quality ultrasounds very cheaply in unregulated markets, but they don't.

      I was the sharp transducer guy, I think. Sorry I annoyed you, man. I'm just crotchety. With all my points I'm not trying to say the details are irrelevant- just that they are only details. They all add up. Making the probe comfortable turns a $3 part into a $300 part. The wiring becomes more expensive. Dozens of individual parts cost hundreds of dollars each, when design work is accounted for. However when all of those are tallied up, they are still too low to account for the price. Dozens of parts for hundreds of dollars each totals like six grand (+/- 3k) when engineering effort is included- at that cost you would expect to see ultrasound machines in the 10k range, but machines like that do not exist.

    2. A few things:
      1. Have you looked up the price of that TI DSP you mention?
      2. Beamforming involves a substantial amount of linear algebra, specifically matrix inversions, especially for the kind of tomographic calculations required for ultrasound... which greatly exceed 100s of FLOP per data point.
      3. Doppler and other ultrasound methods require FFTs, which further adds to the computational reason why this feature is relatively recent, especially on high transducer count, high res machines.
      For example, 256 channels at 62.5 MSPS is 16 GSPS. Assuming ~4N*log2(N) for FFT big-O complexity, and a reasonable N=256, that's 16384 FLOP per 256 inputs, or 64 * 16 GSPS, or ~1 TFLOP of real (not peak theoretical) performance. Even the best single chips now (Stratix 10) can achieve 100 GFLOP/W peak theoretical, and GPUs are 30-50, and for FFT performance at this size, that drops to ~20 and ~10 GFLOPfft/W, meaning the Doppler will "cost" 50-100W, way more than can be put into the wand.
      Additionally, that FPGA is $6k for the chip, and a suitable GPU (one that also has the sustained memory and I/O bandwidth to achieve the needed 10+ GB/s) is going to be comparably priced (especially if the memory needs are addressed).

      3. Accounting for the above, and the fact that transducers capable of doing what is required are far more expensive than you think, the true "minimum" cost, even forgetting all of the "ilities", is very likely to be well within an order of magnitude of the $150k; in fact, I would bet it wouldn't be cheaper than a $30-50k BOM... and thus the same comparison you cite regarding a cobbled-together car (i.e. no "ilities") vs a real car actually leads to a different conclusion than the one you mention.
      4. You mention Tesla. Okay, I'll play your game: a Tesla (or any car other than an interstate semi) is an unreliable heap compared to medical devices on a lifetime-hours basis. An ultrasound machine could see (judging by my OB's office) 3-4 hours of use, 5 days a week, for a decade, or approximately 10,000 hours. That's the equivalent of 250-500,000 miles on a car. Let me know when a Tesla achieves that without substantial refurbishment of the major cost items (battery, power electronics, etc) - and unlike a car, the long-term support costs of the medical device have to be accounted for.

      I'm sorry, but unlike Paul, I AM going to be blunt: I don't think you know what you are talking about, or at least you don't _really_ understand the requirements, assumptions, or real methodology.

    3. The 66AK2H06 is $321.88 each in quantities of 1000, according to the Order Now page. It'll do 150 billion MAC/s.

      Your figure for 4N*log2(N) should be 8192 FLOPS per transform, and since basically every processor of that complexity has MAC units it's actually closer to 4000, and the final figure is around 256 GFLOPS. The next level of DSP up ($670) has 307 GFLOPS. That's only relevant to cheap, low performance systems, which 256 channels is not. With the kind of money that goes into those computations you should be getting rack-mounted servers, not laptops with wheels.

      It doesn't have to go in the wand. How would a GPU even fit in the wand? The power is irrelevant. The wire to the probe doesn't cost 3k; computing offboard is fine. Most importantly of all: you can just buy ten of these chips and chuck them at the problem and it still costs less than 5 grand. A system with 256 channels is gonna cost what, 100k?

      > Even the best single chips now (stratix-10) can achieve 100 GFLOP/w peak theoretical

      From here, it looks like 9 TFLOPS:

      I'm not trying to say Tesla makes medical grade cars. I'm saying they made electric cars cheap in an incredibly competitive market that was 100%, absolutely sure they could not make electric cars cheaply. Just because the market is competitive doesn't mean they are right. I'm sure the production costs of ultrasounds are extremely high, but they don't have to be. You don't need ASICs to make an ultrasound. Only the best ultrasounds have any business being that expensive.
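For reference, the FFT budget being argued over in this exchange works out as follows (a sketch using the same 4N*log2(N) rule of thumb; whether you then halve it for fused multiply-adds is the point of contention above):

```python
import math

# Aggregate Doppler FFT load for 256 channels at 62.5 MSPS, N=256 blocks.
channels, fs, N = 256, 62.5e6, 256
total_sps = channels * fs                  # 16e9 samples/s aggregate
flop_per_fft = 4 * N * math.log2(N)        # 4*N*log2(N) rule of thumb = 8192
gflops = total_sps * flop_per_fft / N / 1e9
print(f"{flop_per_fft:.0f} FLOP per {N}-point FFT, {gflops:.0f} GFLOP/s")
# 8192 FLOP per transform; ~512 GFLOP/s sustained, or ~256 GMAC/s
# if a multiply-add counts as a single operation.
```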

    4. Ah, you're right on the Blu-ray size, I was thinking of a regular DVD; I've corrected that in the main text.

      While those bandwidths are not beyond the capabilities of electronics, it is in the realm of specialised systems. PCI Express 3.0 with 16 lanes, the best you can do right now at economy, hits 16GB/s. 4.0 will get you to 32 GB/s, finally reaching that level. Internal memory on graphics cards/GPUs can hit multiples of that, but it's fixed memory, usually soldered on, and tricky to get right - it's not cheap.

      As for the GPU, beamforming needs multiple functions performed, and each function can take multiple operations, so you have to take the GFlops number and divide it.

      GPUs are almost there, but not quite, this generation of NVidia's cards with Pascal seem promising, but from first release to developing reliable robust systems is a multi-year process. Those cards are also $6,000 to $10,000 each, not including the rest of the hardware needed to support them. Plus, you still need to digitize the signal, which means robust ADC in the system, and as mentioned above it's not cost effective to do it in every transducer (yet).

      Also, remember that a system involves not just the cart with ADC, filtering, beamforming, storage, software, display etc but multiple transducers, along with training, support/warranty.

      As for no incentive to build cheap ultrasound systems - plenty have tried. Many have succeeded, but you're looking at $50k per system, not $150k. I also know people who've tried to do super cheap consumer-level systems, handhelds etc, and they've not yet succeeded. The technology simply isn't there - yet - to make it viable. It would be like trying to do the first iPhone in 1995: in theory all the parts are there (or almost there) and you just need to make it, but in reality you're too far ahead of the game to succeed. Being too early in tech hurts you as much as being too late. But wait 10 years, and LCD displays become viable, along with just enough low power compute, touchscreen technology etc, and you can do it - then suddenly it explodes because everything, finally, came together and was 'good enough'. It'll happen in ultrasound, but likely not for 5-10+ years (maybe groups like Butterfly Network will shock me; I'll be happy if they do).

    5. GPU computation for ultrasound beamforming has been worked on for a few years at least. This PhD thesis on exactly that topic is a good background read if you are interested.

      Or here's one from 2013 where they manage a whole 6 frames per second on a very basic imaging modality.

      It'll get there, people are working on it, but not quite yet.

    6. Plus: Siemens already uses consumer GPUs:

      TI even has a demo of B-mode processing on a $160 processor.

    7. It's a fair step from a demo and basic proof of concept to robust application within a commercial system. It's coming, but not quite yet.

      I can't comment on anything Siemens related specifically, it gets too close to proprietary knowledge - but what system do Siemens use a commercial GPU in, and what is its imaging performance relative to the standard hardware based beamformers? I don't think that paper says.

    8. I don't understand where you get your numbers from. A Pascal Titan is 1.5k, not 6-10k:

      But the fact of it is that ultrasounds are not rolling supercomputers. Siemens and GE are not outdoing NVIDIA, TI and Intel in terms of GFLOPS. If their computers are expensive it's not because computing is expensive, because it isn't. The computing in ultrasound machines is not some special sauce.

      The iPhone involved new technology; ultrasound does not. The computers in ultrasound machines are not supercomputers. From that pdf you posted, the FLOPS for plain beamforming = N*B*f, so ~900 MFLOPS for a 256 channel, 60 MHz system. That has been under $100 since 2003. It now costs ~8 cents. For a low performance system computing cost shouldn't even be a factor, regardless of whether or not it is with more complex algorithms and equipment.

    9. I was quoting for the Pascal P100, which is the enterprise version and has additional memory, better double precision floating point operations, and a warranty that a commercial company can actually use, unlike the consumer parts which have to be evaluated carefully.

      That's what Nvidia charges, for a reason - or are they in on the massive conspiracy to keep ultrasound systems priced high as well?

      Computing in ultrasound is not special sauce - correct, it's not. The maths is well known, there's a ton of papers and textbooks on it, but like any standard application the most effective way to do it takes experience, and no matter what you do if you want high quality images, it's a large amount of data to process, even by today's standards.

      Where did I say that any of those companies were looking at using anything other than Nvidia/AMD/Intel parts for GPU? The economies of scale they offer push it beyond what could be developed internally.

      I linked to a paper from 2013 that used a *basic* imaging mode and they got 6 frames per second, which is unusable for something like cardiac, and isn't even some of the advanced imaging modes. Where were they wrong? If it could have been done in 2003, why were they so poor in 2013? There's academic papers and glory to be had if you can make orders of magnitude leaps over that.

      At this stage all I can say is that if you are convinced that I'm wrong and the prices are far, far too high, then start a company, build systems, sell them, and either make the world a better place or get bought by a larger company for your genius. You've just said it's dirt cheap and easy to build, so go ahead. What's stopping you?

      Seriously, what's stopping you?

    10. This page has a comparison between the Titan X and the P100 and their differences which are especially noticeable at double precision.

  2. A 256-point FFT is 4*256*log2(256), or 4*256*8, which is the 8192 ops you suggest - but that's over 256 points, so 32 ops per sample. You got me there.

    HOWEVER, you obviously don't understand how processors are specified, as EVERY measure of DSP, FPGA, GPU performance counts a MAC as TWO operations, not one.
    So we are at 512 GFLOPfft's
    Most GPUs have really crap performance at 256-point FFT sizes, to the tune of 10-20% efficiency (believe me, I've done it myself, with the code being optimized by some of the best GPU developers in the business). So that's the equivalent of 2.5-5 TFLOPS peak theoretical, which is now in the realm of multiple GPUs, and several $k of cards.
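The arithmetic above, laid out as a sketch. All inputs are the thread's own illustrative figures (256 channels at 60 MHz, and the common 4*N*log2(N) real-op estimate for an N-point FFT):

```python
import math

# FFT load for a hypothetical 256-channel, 60 MHz front end.
n = 256
ops_per_fft = 4 * n * math.log2(n)   # 8192 ops per 256-point FFT
ops_per_sample = ops_per_fft / n     # 32 ops per sample

channels = 256
sample_rate_hz = 60e6
sustained_gflops = channels * sample_rate_hz * ops_per_sample / 1e9
print(f"sustained: {sustained_gflops:.0f} GFLOPS")  # ~492 GFLOPS

# At the 10-20% small-FFT efficiency quoted above, the peak-theoretical
# GPU throughput needed lands in the multi-TFLOPS range:
for eff in (0.20, 0.10):
    print(f"at {eff:.0%} efficiency: {sustained_gflops / eff / 1e3:.1f} TFLOPS peak")
```

At 20% efficiency that is ~2.5 TFLOPS of peak-theoretical hardware, and at 10% it is ~4.9 TFLOPS - the "2.5-5 TFLOP" range claimed above.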

    The data rates are still the bottleneck, so you would need more than a "normal" hot system, more than likely a dual socket server class system.

    So just in the above, you would have at least $5k, and likely $6-7k, of compute hardware.

    Now, on the ADC side, you have at the lowest, about $5-10/channel at this rate, in quantity, just for the chips, and those are likely only 10-12bit, not the 14+ ENOB required, but hey, I'll give you the benefit of the doubt. That means $1250-2500 in ADC chips...and a comparable amount in DAC, so let's say $2500, not including PCB and supporting parts, like how to get 256 separate ~1 Gbps data streams into more usable data pathways (like pcie3.0)...but let's say that's free.
    We are now at $10k minimum in BOM, and we haven't gotten to 256 channels of instrumentation grade, 100-200Vpp amplifiers, plus multi-MHz, 100+dB isolation diplexers (oh, you didn't think of that? Sad for you) that have extremely low distortion...and 256 channels of instrumentation grade LNA for the receive side.
    Let's be generous and say those parts, due to cell phone tech, are single digit dollars each, say, $2-4. Well, that's another $1-2k.
    So we are at $11-15k and we haven't touched the cable or wand(s).
    Cable...well, we put everything in the cart, so we now need a 256-pair, 200V-rated, extremely low crosstalk cable that is uniform to <1%, and has extremely low loss and matched impedance over a 25-50% relative bandwidth at tens of MHz.
    This would look like effectively the capability of a cat5 4-pair cable, but with 1/5th the area per conductor, yet comparable performance...oh yeah, and it has to withstand tens of thousands of coil/uncoil cycles, be <10mm in diameter, and provide medical grade isolation of hundreds of volts in a life-safety way. Let's be nice and say that cable is $1000 for a 2-3m section.

    So now we are at $12-16k and we haven't touched the actual transducer wand(s), and I'm being EXTREMELY generous.
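One reading of the running tally above, as a sketch. The low/high bounds are the ranges quoted in the text (deliberately generous assumptions, not real quotes):

```python
# Running BOM tally for the cart + cable, using the ranges quoted above.
bom_ranges = {
    "compute (GPUs + dual-socket host)": (6_000, 7_000),
    "ADC + DAC chips, 256 channels":     (3_750, 5_000),
    "TX amps, diplexers, LNAs":          (1_000, 2_000),
    "256-pair medical-grade cable":      (1_000, 1_000),
}
low = sum(lo for lo, _ in bom_ranges.values())
high = sum(hi for _, hi in bom_ranges.values())
print(f"cart + cable BOM: ${low:,} - ${high:,}")  # $11,750 - $15,000
```

That lands at roughly $11,750-$15,000, in line with the $12-16k figure above once the rounding applied at each step is taken into account.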

    1. >HOWEVER, you obviously don't understand how processors are specified, as EVERY measure of DSP, FPGA, GPU performance counts a MAC as TWO operations, not one.
      So we are at 512 GFLOPfft's

      No; the 66AK2H06 does 2 MACs at once in each core, so it does the equivalent of 4 FLOPs per cycle, per core. GPUs would be awful for ultrasound because they are throughput choked and way too synchronous. I only brought them up for context of how little computing power ultrasound requires in the grand scheme of things. You're also totally wrong on the cost - the new NVIDIA Titan is 11 TFLOPS for $1200:

      You are describing a system which you said had a 50k BOM. Give it another grand for the cart, one for the monitor, and one for the power supply, then 3 wands at 3k each: 24-28k. If you include the actual cost of 11 TFLOPS of GPU, it's 20-24k. Plus, the cost of the cable is irrelevant; it's included in the cost of the wand.

      But really, those numbers are insane. There's no need for a 1k cart with a specially designed ergonomic keyboard. The cart should be $100 + a $1000 laptop. Those BOMs should be 20k or less, and they should be a quarter of that for low performance systems.

    2. Seriously awesome discussion though, loving the detail you're getting into.

    3. You cite all the components necessary for the ultrasound analog frontend (pulsers, high-voltage multiplexers, low-noise amplifiers, ADCs, maybe others). What do you think about Maxim's recent MAX2082?

      Looks like it is a fully integrated analog frontend solution that contains all the components you mentioned. One potential weak point is, maybe, the ADC precision (12 bits). But is that enough for mid-range or handheld ultrasound?

      It's only $63 and that's for 8 channels, so ~$7/channel in volume.

      That's $1800 for the whole 256-channel analog frontend.

      Of course we'd also need an FPGA to receive all the raw data from the ADCs over LVDS, preprocess it, and send it to the main computation device. The receiving part could be done with a Xilinx Artix-7 FPGA, the xc7a200t ($300), but there are problems interfacing such a system to a GPGPU server - even PCIe 3.0 x16 (note that this FPGA doesn't have a hardware PCIe 3.0 x16 IP block, so we'd need an LVDS<->PCIe bridge IC) gives only 12 GB/s bandwidth in practice, while we need 32 GB/s. Thus for a GPGPU-based solution we'd have to lower the number of channels to ~80.
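The channel budget implied by that practical link rate can be sketched as follows. The per-channel rate assumes 60 MHz sampling at 16 bits, which is an illustrative figure rather than the MAX2082's exact output format:

```python
# How many raw channels fit through a practical PCIe 3.0 x16 link?
practical_link_gbytes = 12.0              # ~12 GB/s achievable in practice
per_channel_gbytes = 60e6 * 16 / 8 / 1e9  # 0.12 GB/s per channel
max_channels = practical_link_gbytes / per_channel_gbytes
print(f"~{max_channels:.0f} channels")    # ~100
```

That gives ~100 channels before protocol framing and overhead, consistent with the ~80-channel estimate above once real-world overhead is included.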

      The second alternative is to get hold of the datasheet for the NVLINK interconnect (it may be just LVDS; NVIDIA has mentioned that NVLINK was designed with connectivity to FPGAs in mind) and use the $6000 Nvidia P100 as the compute device.

      There is a third, more realistic option - use a large FPGA and make it act as both the data receiver and the compute device. Xilinx has high-performance, high-density Kintex FPGAs, for example:
      XCKU040, ~$1400 in volume (perf: 2.3 TeraMACs/s)
      XCKU115, ~$7000 in volume (perf: 6.5 TeraMACs/s)
      The arithmetic in these FPGA DSP slices is 18-bit (though reconfiguration to Nx precision and even floating point is possible if you pay with size and speed). Could ultrasound DSP algorithms be adapted to such precision? I know that military radars commonly use such FPGAs for DSP, so it should be possible (?)

      The fourth, capital-intensive option is to design and manufacture an ASIC for single-chip ultrasound DSP. Recent NN accelerators (Nervana, Google TPU) show that a 10x improvement over GPGPU is possible with such an approach (though this requires using fixed-point or custom low-precision FP arithmetic; using IEEE 754 FP would make the win over GPGPU smaller, perhaps 2x). It is known that designing a SoC at 28 nm costs ~$30M, at 14 nm ~$80M (source: ). Such an investment is possible only for a high-volume device (but it is possible that low-cost portable ultrasound could be just such a device).

      Let's say that a single transducer with a cable would cost another $3000, a 12-layer PCB $100 (consumer motherboards are 12-layer), and another $100 for the enclosure.

      If we settle on the large FPGA (6.5 TMACs/s, $7k) option, then the whole BOM is ~$13,000.

      Criticism is appreciated, it'd be great to compile a BOM for portable, mid-end and high-end ultrasound systems.
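As a starting point for that compilation, here is one reading of the FPGA-option BOM sketched above. Every figure is the commenter's estimate, not a real quote:

```python
# FPGA-option BOM, per the estimates in the preceding comment.
bom = {
    "256-ch analog frontend (32x MAX2082)": 1_800,
    "XCKU115 FPGA":                         7_000,
    "transducer + cable":                   3_000,
    "12-layer PCB":                           100,
    "enclosure":                              100,
}
total = sum(bom.values())
print(f"${total:,}")  # $12,000
```

The itemised parts sum to $12,000, which rounds to the ~$13k figure quoted once supporting parts (power, memory, connectors) are added.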

    4. Noting that FPGA is a major determinant of cost, more questions:

      * How do we downscale the device to fit the cheaper XCKU040? I guess it should be enough for a 128-channel system. This lowers the analog price as well, cutting the $13k BOM by $6.4k.

      * Could one transducer be enough for the majority of applications (for a hypothetical portable "stethoscope" device)? If so, then this single transducer could be embedded in the device, without a cable (though power consumption and heating would be hard issues - could active cooling save us here?)

      And another question: what's inside the Philips Lumify? How much is Lumify's BOM? Does Philips profit from selling it?
      My guess is that Lumify is a 64-128 channel device that uses either a MAX2082 + ASIC for processing, or something similar to the MAX2082 - a fully integrated ultrasound analog frontend.

      I wonder how one would go about building their own Lumify...

    5. Basically, I'll say the following:
      1. Based on current bleeding-edge tech, it is *possible* to get the raw BOM of a modern ultrasound cart+cable down to the ~$15k range.
      2. Based on having fabricated extremely high fidelity, close-packed, high performance ultrasound transducers, and having substantial hands-on experience with all the options here (including having made some of the highest, if not the highest, performance CMUT and PMUT devices ever), it is possible that the wand cost could be brought down to a BOM of $2-4k each, in quantity, with the required stability, thermal performance, bandwidth, and consistency.
      3. All that yields a BOM-only cost of $20-30k depending on other options, fabrication costs, and quantity (the above assumes that anything custom would have reasonable yields and be produced in the thousands, minimum).

      For something that has the associated "ilities" (which you don't address, but I have experience with), having a list price to BOM cost ratio of 3-5 is entirely reasonable when there are millions of dollars of design work, testing, etc., that have to be spread across only a few thousand devices.

      Additionally, lifespan testing, support, etc all factor into that.

      Hell, the iPhone has a list:BOM ratio of 1.5-2 depending on model and incentives, and it isn't bleeding edge or medical grade, has a shorter lifespan, AND is made in the hundreds of millions.

      I'm sorry, but I'm sticking to the "you just don't understand" assessment of your bitching.

      If I'm wrong? Then perhaps you should put your money where your mouth is, and _profitably_ (remember, Siemens, GE, and others have to amortize into the cost their development and backend costs...unlike most startups, so this is NET profitability, not the gross profitability used by "disruptive" startups who effectively hide their net costs in overly valued equity sales) make a low cost ultrasound system.

      If you are right, it should cost you "only" a few million in R&D, meaning, given the hundreds of millions of sales and huge markups possible if you undercut everyone by 2x with your 4x less expensive system, you should easily be able to get the venture funding....

      In fact, it's AMAZING, given the massive profit potential assumed by your figures, why there aren't tens of startups pursuing this in a race to be on sale, be acquired for massive amounts, or go public...strange....or maybe I'm correct and you don't know what you are talking about.

  3. You really don't get what I was saying, in almost any way.
    That DSP, with its 150 GFLOPS, is 75 GMACs/s, not 150. Whenever you see a peak theoretical FP number, it will be counting MACs as two ops. I.e. GMACS = GFLOPS/2.

    And you also miss the point that peak theoretical isn't the same as realizable, and the actual small-FFT efficiency of that Titan is even lower than I originally described, as it's data retention limited, not calculation limited, until you get substantially above 256-point FFTs.

    And the cable is included in the cost of the wand? No idea why you think it would be.

    You also miss the point that BOM vs list is the big issue, and one you don't address. Even if with the most modern tech the BOM is $24k, even at iPhone levels of list vs BOM, that would mean a list of $40-50k...which I think is entirely possible for the next generation of systems, given all the tech advancements I think we can agree on.

    But that is a far cry from the thousands-to-tens-of-thousands-of-dollars numbers strewn about in this and other threads, which are simply unrealistic...but someday (5-10 yrs?) might be possible.

  4. *data rate, not retention.

    As for the efficiency, here is another way to look at it.
    PCIe3.0x16 is 16 GBps peak theoretical, each way. Actual realizable is usually 50-75% of this for large transfers, so let's call it 12 GBps.
    For 32bit FP, that's 3 billion words/s, or ~3/16ths of our 256 transducer example.

    So those FFTs (as you validly pointed out, 32 ops/sample) would only consume ~100 GFLOPS of compute, so the efficiency of that card would be ~1% of peak theoretical. Now for larger FFTs it's much higher. For instance, for 256k-point FFTs, it's 320 ops/sample, and the TITAN would be at 10%. Doing the fixed-to-float conversion on the card would boost the transaction rate on input by a factor of 2, but the output would still be 3 Gwords/s sustained or less, so you would have to do some other maths.
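The "~1% of peak" figure, step by step. All inputs are the numbers already quoted in this thread (the practical PCIe rate, the 256-point-FFT op count, and the Titan's peak), not measurements:

```python
# Efficiency of a PCIe-fed Titan doing 256-point FFTs.
sustained_gwords = 3.0         # 12 GB/s over PCIe / 4 bytes per 32-bit word
ops_per_sample = 32            # 4*N*log2(N)/N for a 256-point FFT
sustained_gflops = sustained_gwords * ops_per_sample  # 96 GFLOPS
titan_peak_gflops = 11_000     # ~11 TFLOPS peak theoretical
print(f"{sustained_gflops / titan_peak_gflops:.1%}")  # 0.9%
```

96 GFLOPS of sustained work against ~11 TFLOPS of peak hardware is under 1% utilisation, regardless of how fast the card itself is.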

    My point remains that the increased computational horsepower of modern current-generation GPUs isn't as big of a factor for this application, due to the data rate and operations required. This is the same problem we run into with large RF systems and radars, and why FPGAs (effectively configured as matrix DSPs) are more typically used in Doppler and beamforming systems when compactness is required, albeit at higher costs.

    So what you envision now using GPUs is actually well known in industry, and will likely reduce costs because instead of $10k FPGAs * a few, they can use $500-1000 GPUs * a few...or a cost reduction of a few $10ks...but bear in mind, just 2-3 yrs ago, those figures were 2-4x different (damn Moore's law).

    So I stand by my conclusion, it's entirely possible that the cost for a given performance will be reduced by a factor of a few in the next generation.

    HOWEVER, I would bet my substantial salary that the cost per machine stays constant, and the capability instead increases, as the improved diagnostic performance is worth the cost, for most markets.

    That said, I bet you will see the same multi-level fracturing of the market, from minimal-cost, low-capability systems up to expensive, high-performance ones, as is the case today. But one reason you don't see the low-cost machines making substantial market inroads is that used machines can be obtained at comparable cost to a new but equivalent-performance "disruptive" device...hence why the best disruption is actually occurring at the high end (where those technologies, like GPUs and better transducers, are being applied).

    The same holds true for many other industries, like lab equipment, or semiconductor fab...yeah, you could make a low-performance wafer handling system very cheap now...but you wouldn't, because the market naturally upgrades and there is still capital life left in the older, but equivalently performing used systems on the market.

    *In actuality, it's closer to 50% when the transactions are equally bidirectional due to memory bandwidth on the CPU end, but I'm being generous.

  5. This comment has been removed by the author.

  6. Note: a review of component selection for modern ultrasound systems confirms the use of FPGAs.

    1. This is a ~6-year-old paper, and a lot has changed in that time. Can FPGAs be used to do beamforming? Yes, they can. Are they cost-effective and practical? That depends on the requirements of the system in question - that they have not been widely adopted indicates that they are not (yet?) economically the right choice. Note this line from that paper:

      "An FPGA-based beamformer can consume up to 25 times more power per channel than an ASIC implementation. "

      That's pretty significant, especially if you want to make it portable. And not toasty-hot.

      Will usage change in future? Maybe - Intel are now making inroads here, incorporating FPGAs alongside CPUs, but my money is more on GPUs and Intel's equivalent (Xeon Phi). Energy efficiency is important; real-world systems don't get to ignore it.

      Otherwise, it's a nice summary article.

  7. Also interesting: integrated MEMS ultrasound devices have in fact already invaded the smartphone market:
    Xiaomi has already shipped a $300 smartphone with this sensor.
    Given their margins, the sensor's price should be <$10.

    So, the smartphone vs ultrasound machine comparison from the HN thread becomes quite literal.

    And another one: there is a competitor to Philips Lumify called Clarius:

    Chinese manufacturers are already producing devices similar to Clarius for "$200-1500":

    The question remains: which transducer and beamformer approaches do these ultraportable systems use? How many channels do they have?

    1. Those fingerprint sensors are significantly different from ultrasound transducers used for medical. And with the best will in the world, they could not be used as such. An ultrasound medical device needs an acoustic window sufficient to view the organ of interest and capture the echo signals (physics and anatomy are pretty limiting here!). It needs to be the right frequency range to resolve the artifacts required, and also to penetrate deep enough (in combination with power). It needs to resolve over a wide depth, and scan at angles. Those sensors don't do much of that.

      I've worked on projects in both areas, they're not the same. If you look at the specs of a fingerprint sensor, it's vastly different, and you can also get away with much less detail in the signal compared to when you make a medical diagnosis. They're great engineering, but the compromises made are totally different.

      Yes, MUT devices are growing in capability, as I mentioned above, and the community is well aware of them (and has been for 20+ years) with companies having invested large amounts of time, money, and effort but there are few products on the market yet - they're just not ready except in niche applications (such as fingerprint sensors).

      Comparing these and even a medium-end system is like comparing a child's motorised bike and a well kitted out family van. Each serves its purpose and price point, but if you want to take your family of five on a road trip across the state, safely and quickly, it's pretty clear which you'd choose.

      It's not a literal comparison if you actually compare specs, they aren't even remotely the same.

  8. Could you please fix your figures - you're confusing gigabytes per second with gigabits per second (Gbps vs Gb/s) - one time it's 224 Gbps, the other time it's 224 Gb/s.
    And Thunderbolt supports only 10 Gbps (gigabits per second).

    1. Both are the same. Small 'b' is bits, large 'B' is bytes. 224Gbps is the same as 224Gb/s, and is 28 GBps or 28 GB/s. Nothing to be fixed, other than to be more consistent in the method of noting units. The units themselves are correct.
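The conversion in question, spelled out:

```python
# Bits vs bytes: the two spellings denote the same quantity.
gbits = 224            # 224 Gb/s (gigabits per second)
gbytes = gbits / 8     # 8 bits per byte
print(gbytes)          # 28.0, i.e. 28 GB/s (gigabytes per second)
```

Lowercase 'b' is bits and uppercase 'B' is bytes, so 224 Gb/s and 28 GB/s describe the same stream.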

      Thunderbolt 1 is 10 Gbps. Thunderbolt 3 goes to 40 Gbps and has been available for about a year, though isn't too common yet.

  9. This comment has been removed by a blog administrator.