Friday, March 22, 2013

Tablets are the New Mobile

Tablets are the new mobile. The slide deck in the article below shows the growth of mobile and tablets in the Apple and Android ecosystems; there are many more slides at the link below. The future of mobile follows the trajectory described in the April 15, 2009 post Mobile Computing the Next Big Market.


Insightful, timely, and accurate semiconductor consulting. Semiconductor information and news at -

Future Of Mobile [SLIDE DECK]


Henry Blodget and Alex Cocotas | Mar. 21, 2013, 11:37 AM

To kick off the conference, our BI Intelligence team—Marcelo BallvĂ©, Alex Cocotas, and I—put together a deck on the current trends in mobile. We looked closely at the growth of smartphone and tablet adoption, the platform wars, and how consumers are actually using their devices.


Wednesday, March 20, 2013

Samsung Galaxy S4 BOM $236 (Manufacturing Cost)

The article below discusses the Galaxy S4's bill of materials: the "latest model of Samsung's flagship phone costs $236 to produce, according to information analyzed by IHS...

Samsung manufactures at least $149 worth -- or 63 percent -- of the parts needed for the S4, and IHS thinks Intel and Broadcom supply the rest."

As Samsung improves the yield of its manufacturing process, the cost will come down significantly. For example, the Exynos application processor die "cost is $30, compared with the S3's chip, which costs $17.50."

As Samsung's die yields ride down the learning curve of the manufacturing process, the BOM could drop by roughly 5%. Similar manufacturing improvements in the other chips Samsung makes will bring the Galaxy S4's cost closer to that of the S3.
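The cost mechanics behind that estimate can be sketched in a few lines. This is a hypothetical model, not IHS's methodology: the wafer cost, dies per wafer, and yield figures below are assumptions, chosen only so the outputs land near the article's $30 die cost and the roughly 5% BOM effect.

```python
# Hypothetical yield/cost sketch -- not IHS's actual model.
# Cost per good die = wafer cost / (gross dies per wafer * yield).

def die_cost(wafer_cost, gross_dies, yield_fraction):
    """Dollar cost of one good die at a given line yield."""
    return wafer_cost / (gross_dies * yield_fraction)

wafer_cost = 6000.0   # $ per processed wafer (assumption)
gross_dies = 400      # candidate dies per wafer (assumption)

early = die_cost(wafer_cost, gross_dies, 0.50)   # early-ramp yield (assumed)
mature = die_cost(wafer_cost, gross_dies, 0.80)  # mature yield (assumed)

bom = 236.0                # IHS's BOM estimate for the S4
savings = early - mature   # per-phone saving on the SoC alone

print(f"early ramp: ${early:.2f} per die, mature: ${mature:.2f} per die")
print(f"BOM reduction from SoC yield alone: {savings / bom:.1%}")
```

With these assumed inputs the die cost falls from $30 to $18.75, about a 4.8% cut in the $236 BOM, which is the kind of learning-curve effect described above.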



At $236, Galaxy S4 costlier to produce than S3, says report

by Donna Tam

IHS iSuppli's virtual teardown of the latest Samsung smartphone shows it's 15 percent more expensive for Samsung to produce than last year's model.

The Samsung Galaxy S4's production costs are 15 percent higher than those of its predecessor, thanks to an upgraded display, sensors, processor, and memory, according to an IHS iSuppli teardown.

The latest model of Samsung's flagship phone costs $236 to produce, according to information analyzed by IHS. The firm said its estimates could change with a physical teardown, but for now the improvements equal a heftier cost. The HSPA+ version of the S4 has 16 gigabytes of NAND flash memory and costs $244 in materials plus $8.50 to manufacture. The LTE version is a bit cheaper at $233.

"Among the upgrades are a larger, full high-definition (HD) display; a beefed-up Samsung processor; and a wealth of new sensors that set a record high for the number of such devices in a smartphone design," Vincent Leung, senior analyst for cost benchmarking at IHS, said in a press release.


The S4's new AMOLED 1,920x1,080-pixel display is where a bulk of the increased costs came from for both the HSPA+ and LTE versions. It costs $75 versus the S3's $65 display. IHS' analysts said it's the first phone to have an AMOLED display with this resolution, since it's been a challenge to squeeze pixels into this type of display in the past.

IHS also notes the high number of sensors on the S4. It's not surprising given the smartphone's multiple new features, such as integrated eye-tracking software and the ability to monitor health stats. In addition to sensors that were available in the S3 -- geomagnetic and proximity sensors, as well as an accelerometer, gyroscope, and barometer, among others -- Samsung added a new infrared gesture sensor and a humidity and temperature sensor. The barrage of sensors costs $16, up from $12.70 in the S3.

To run the HSPA+ phone's apps, Samsung "is believed to be" using an eight-core chip of its own design, the Exynos 5 Octa, according to IHS. The cost is $30, compared with the S3's chip, which costs $17.50.

Wayne Lam, senior analyst for wireless communications at IHS, said in the release that the processor integrates two quad-core processors into one chip. This lets the phone assign less intensive tasks, like phone calls and social-media apps, to the less powerful processor, saving power. The more powerful processor is used only for things like video gaming.

The 4G LTE version uses the Qualcomm Snapdragon 600, a quad-core apps processor and LTE radio solution, which costs $20.

Samsung manufactures at least $149 worth -- or 63 percent -- of the parts needed for the S4, and IHS thinks Intel and Broadcom supply the rest.

Tuesday, March 12, 2013

Apple A7 Foundry: TSMC, Samsung, Intel?

Reuters last week suggested that Apple and Intel have at the very least discussed the option of introducing Intel-built chips into the iDevice range. It would make sense for Apple to diversify its fab sourcing.

More is discussed in the report below (see the update). However, it could also be just a rumor intended to influence the real negotiations between Apple and its vendors.

More about it in an October post, Next iPhone A7 Made by TSMC not Samsung.

Update - There is an interesting discussion on LinkedIn about Intel as an Altera foundry:


"Is Intel ready for foundry?

Started by Ron Jones

The announcement this week that Intel would start providing foundry services to Altera moves them from a toe-in-the-water foundry provider to a player. On the surface, the question of whether Intel is ready for foundry might seem a bit silly.
Intel has leading edge technology, a massive manufacturing engine and world class controls on yield and cycletime. They have a highly integrated environment where design, product and process engineers have access to fab data, planners have availability of WIP information and sales orders, etc. This works great for them as an IDM.

Being a foundry is not just about building wafers for your customers, however. There is a broad spectrum of customer facing data and information that fabless or fab-lite customers require from their foundry on a regular and near real time basis. This includes:

• WIP reports (both in snapshot and lot-move format)
• Yield loss information both in process and at probe
• PCM data
• WAT data
• Lot start, WIP status, lot hold, ship alert, . . . information
• E-commerce information
The foundries have spent well over 15 years developing and refining a broad range of customer facing systems that provide this to their customers, both large and small. The information flows from WIP management and multiple other systems into data warehouses that are partitioned to keep customer data protected. It is a massive undertaking and one that continues to be refined today...."

and a reply by Andy Turudic ...

"Altera is ONE customer, a customer that is very savvy at doing pretty much every part of wafer manufacturing THEMSELVES, except the dipping silicon in beakers, and making almost all the transistors work, part. And Altera has IP to where ALMOST all the devices working is perfectly acceptable and yields passing devices at the billion+ transistor count level.

Altera's move to Intel is merely ONE customer and that does not make Intel a "player", but a shrewd business strategist that is moving to fill its fabs with large volume, BIG die, devices that pay a premium for high yield without compromising gross margin for either partner, and by taking on process-savvy customers vs every Joe Startup that comes along and needs caressing, handholding, WIP and process monitors, and a bureaucracy of "standard" support because they are not worth the time of the foundry in providing direct support. With Altera, Intel can remain an "IDM", no differently than it has two process flows in its captive fabs today. Both companies have honorable business dealings, so there's no fear of IP theft as there could be in an offshored scenario.

Intel is a world class fab, arguably at or very near the top of choices, that is capable of yield in the billion+ transistor class. Those are few and far between, as is demonstrated by Altera's competitor who appear that they cannot yield big die and instead are looking at 3D stacking smaller ones.... "

Rumor: Intel could land 10% of Apple's 'A7' chip orders
By Sam Oliver, Tuesday, March 12, 2013, 08:27 am

As Intel gets into the chip contracting business, the company could obtain as much as 10 percent of Apple's next-generation mobile chip orders, insiders believe.

"Institutional investors" cited by DigiTimes on Tuesday believe Intel could be making a play to get a slice of Apple's business for its so-called "A7" chip, expected to power the company's next-generation iPhone. Apple has reportedly been looking to move its chip production contracts away from rival Samsung, which currently handles all of the company's current A-series chips.

The company expected to take the bulk of the work away from Samsung is Taiwan Semiconductor Manufacturing Company. Rumors have claimed for years that TSMC is on the brink of building chips for Apple, but that has yet to happen.

Tuesday's report claimed that both TSMC and Samsung are competing for contracts to build "A7" chips for Apple. It said that production of A-series chips through TSMC is expected to begin in 2014.
Now, institutional investors reportedly believe that Samsung will receive about half of Apple's "A7" orders, while TSMC will take 40 percent, and Intel will grab the remaining 10 percent.

"In the past, Apple's processor orders were unattractive because of low profit margins and Samsung was the only cooperating firm," the report said. "In addition, at the time Samsung's smartphones were no threat to Apple's iPhone. But Samsung has since become the biggest smartphone vendor in the world."

Just last week, a separate report suggested that Intel and Apple were in talks for Intel to potentially build next-generation chips for devices like the iPhone and iPad. Intel may be making a shift to build ARM-based systems-on-chips for companies like Apple after the PC market has struggled in recent years against smartphones and tablets.
Intel's current CEO, Paul Otellini, plans to retire in May, and some market watchers believe a new chief executive could push the company in a different direction. In particular, contracts to build custom chips for mobile device makers could help keep the chipmaker's manufacturing facilities working at full capacity.

Wednesday, March 6, 2013

Proving Obviousness in Patent

The article below explains the difficulty of proving that a patent claim is obvious over the prior art.


Proving Obviousness in Patent Cases From the Experts
By John Haynes and Rishi Suthar, March 6, 2013

Companies accused of patent infringement often ask “How did they get a patent on that? It is so obvious.” The reality is that most modern patents are nothing more than a combination of known elements assembled in a novel—or not so novel—manner. The challenge for accused infringers is how to prove that the claimed combination of elements was known to those skilled in the art. The “smoking gun” in such cases is a single prior reference that discloses the exact same combination, but such smoking guns are all too rare. Accused infringers are thus often forced to argue that the patented combination would have been obvious to others using only the ordinary skill of those in the field.

Practitioners generally agree that since the U.S. Supreme Court’s ruling in KSR v. Teleflex (2007), proving obviousness has become easier, because KSR eliminated any rigid formula for proving obviousness and placed renewed emphasis on the problems being solved and the use of plain common sense. KSR’s looser framework, however, can also create a trap for the unwary who fail to gather the proof needed to satisfy even the looser standards set forth in KSR.

Awareness of these traps, and how to avoid them, is essential to the successful litigant.

Common Pitfalls in Proving Obviousness

1. More is not always better: Accused infringers may spend hundreds of thousands of dollars creating claim charts that map out invalidity theories across dozens of prior art references. These charts often focus on finding each individual element of a combination in as many places as possible, but rarely focus on the reason why a “skilled artisan” (i.e., a person having ordinary skill in the art) would want to put those separate pieces together. Proving obviousness is like putting together a jigsaw puzzle: You certainly need all the pieces, but if you do not put the pieces together the jury will never understand the full picture.

2. Legal rhetoric is no substitute for competent evidence: Attorneys often make the mistake of using legal arguments as the glue to hold their puzzle together. Bald assertions that a person skilled in the art would know how to fit the puzzle together, however, do little to explain what the puzzle should look like or why the skilled artisan would know exactly where the pieces fit.
A recent Federal Circuit decision, ActiveVideo v. Verizon (2012), provides a good example of how lawyer rhetoric is rarely sufficient to demonstrate obviousness. In ActiveVideo, the defendant’s expert testified that each of the claim elements were “modular” and could be combined to achieve the claimed combination because of efficiency and market demand. The expert’s assertions were rejected, however, because he failed to explain how the “modular” puzzle pieces fit together in the exercise of ordinary skill.

3. Common sense must be common knowledge: Although KSR placed renewed emphasis on the use of common sense to solve ordinary problems, it did not sanction the use of “common sense” to plug holes absent proof that the skilled artisan was aware of both the problem and the common sense solution. Litigants need to frame the problem facing the skilled artisan and then explain how that artisan would use their ordinary skill to solve the problem in the same manner as is claimed in the patent.
This requires more than lawyers’ arguments about “common sense” dictating the claimed combination. It requires a careful analysis of how the skilled artisan would have approached the problem, the tools available to solve it, and the reasons why solving it would have resulted in the claimed combination. In the jigsaw puzzle analogy, you need the pieces, the picture on the box, and the knowledge that putting the pieces together will yield that picture.

How to Avoid the Pitfalls

As the Supreme Court stated in KSR, the obviousness analysis is a flexible one. In other words, the approach is highly dependent on the technology at issue and the problems the patent seeks to address. Regardless of the approach, however, following a few simple guidelines will go a long way towards a successful defense.

1. Don’t forget the basics: The Supreme Court provided the framework for proving obviousness in the seminal case of Graham v. John Deere (1966). KSR did not lessen the importance of this framework, and litigants should always begin an obviousness analysis by outlining the proof needed to prevail on each element of that framework. This requires not only developing a clear picture of the prior art, but also a clear picture of the industry at issue, market pressures, and secondary considerations like long-felt need and commercial success.

2. Analyze the invention as a whole: Most inventions are combinations of known elements, so proving obviousness requires more than finding each piece of the puzzle. Care must be taken to explain, in detail, the modification to the prior art as a whole.
Obviousness in some cases, such as in KSR or Stone Strong v. Del Zotto (2011), may be satisfied by explaining the reasonable expectation of success of combining the prior art’s known elements, but more complex inventions may require expert explanation of why one skilled in the art would have been inclined to combine the teachings in seemingly unrelated disciplines. Although composing an owner’s manual is unnecessary, enough detail should be presented so that it is clear that one skilled in the art would recognize and perform the modifications to the prior art with anticipated results.

3. Always identify a convincing reason why the specific combination of known features claimed in the patent would have been obvious to the skilled artisan: While the Supreme Court acknowledged in KSR that the prior art need not explicitly provide a motivation to combine prior art features, it did not eliminate the requirement that the skilled artisan have a reason to combine prior art features in the manner claimed. In KSR, the reasons to combine stemmed from market pressures of transitioning from cable-actuated throttles to drive-by-wire systems, in connection with common problems existing in the prior art. Motivation may also be shown through “common sense” or by demonstrating that the claimed combination would have been “obvious to try” in order to solve a particular problem. To prevail using such motivations, litigants must do more than simply repeat the language of KSR.
As the Federal Circuit recently explained in In re Novel (2012), succeeding on a common sense theory requires the accused infringer to provide a rational explanation as to why the skilled artisan would consider a specific combination a product of common sense. Similarly, when relying on an “obvious to try” rationale, care must be taken to explain precisely why there are only a finite number of solutions to the problem at hand, as well as why the particular modification to the prior art yields nothing more than expected results. See Perfect Web v. InfoUSA (2009). Even though conclusory statements in these scenarios are tempting, they will almost guarantee an unsuccessful defense.

4. Develop a solid evidentiary record for each aspect of the obviousness inquiry: This requires forming obviousness theories early, using the discovery process to develop those theories, and then carefully funneling the evidence to support them through fact and expert witnesses.
While obviousness determinations often turn on the sufficiency of an expert’s opinion, successful litigants should not assume that experts alone will carry the day. It is just as important to gather literature and fact-witness testimony that directly or indirectly supports the obviousness defense. Litigants should seek to introduce testimony from knowledgeable individuals in the industry to establish known problems and known solutions in the prior art, as well as reasons why it is desirable to combine those solutions in the manner recited in the claims of the patent. Such evidence will not only bolster the expert’s opinion, but may also help preserve important invalidity defenses in the event the fact finder does not give the expert’s opinions much weight, or in less complex cases, no weight at all. See, e.g., Stone Strong v. Del Zotto (2011).

While no set blueprint exists for guaranteeing a perfect obviousness combination, the main point to remember is that proving obviousness requires proof, and a lawyer’s argument or conclusory expert testimony is not sufficient. Careful litigants form their obviousness theories early in the case and develop the factual record to support those theories. Keeping each of the above points in mind throughout litigation will put you on the right track to avoiding the usual pitfalls and presenting a convincing invalidity defense.
John Haynes is a partner in Alston & Bird’s Intellectual Property Litigation Group and has experience litigating a wide variety of disputes involving complex technology and intellectual property. He specializes in complex patent cases involving electronics, wireless communications, computer software, and networking. Rishi Suthar is an associate in the firm’s Intellectual Property Litigation Group. Prior to attending law school, Rishi worked as a patent examiner for the U.S. Patent and Trademark Office.

Monday, March 4, 2013

Storage Memory Tipping Point

The storage tipping point is upon us
Ambuj Goyal 2/27/2013 2:01 AM EST

Keeping pace with the onslaught of Big Data requires a revolutionary approach to storage. It’s well known that the performance of computing has advanced at a much faster rate than the systems that store and retrieve the information they generate.

For many, this game of catch-up has existed since the first digital storage systems were introduced more than 50 years ago. At the time, the concern was no longer about the performance of computing, but about creating a digital storage system that could keep up with it.

Thus began the subsequent watershed moves from paper punch cards to magnetic tape and then hard disk drives—moments that revolutionized computing and ultimately the world in which we live.

Today we find ourselves at another critical technological juncture that once again is demanding a revolutionary approach to storage—an approach that will help it keep pace with not only computing, but with the information onslaught of Big Data.
To understand where the storage industry is headed, one need only look to the reason that computing has historically outpaced it. Unlike the storage industry, computing has continually leveraged and advanced semiconductor trends while storage systems have remained mechanical, with motorized wheels of tape or spinning disks. In fact, computing shifted from mechanical devices more than a hundred years ago, while digital storage, for the most part, remains tethered to technologies born out of the 1950s.

Not any longer. We are at the tipping point of a new era of computer storage that will witness entire systems based on flash semiconductor memory to handle fast moving, operational data in real-time. Though flash has been utilized in a variety of capacities over the past 30 years and in hybrid storage systems over the past several years, complete flash systems will dominate the landscape in the coming months and years.

All-flash systems will not only provide exponential performance gains over mechanical and hybrid systems, they will help organizations dramatically lower data center energy consumption rates due to their inherent low power-consuming memory and lack of moving parts—no small feat. According to a 2011 study by Stanford University, data centers account for 2 percent of all the electricity consumed in the U.S.

Improved performance, reliability, and durability

To be sure, such systems will be the storage platform of choice to handle ever-growing and increasingly critical workloads such as credit card processing, stock exchange transactions, manufacturing and order processing systems. Even app stores on the web are starting to use flash-only storage.

Such attributes as improved performance, reliability, and durability make flash systems desirable today, but virtually mandatory for the future. That’s because the tsunami of Big Data shows no sign of receding. Researchers predict that the digital universe—all the digital information created around the world—will hit 8 zettabytes by 2015. That’s about all the data found in the U.S. Library of Congress times 800 million.
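The Library of Congress comparison checks out arithmetically if one assumes the oft-cited ballpark of roughly 10 terabytes for the Library's digitized print holdings (an assumption, not a figure from the article):

```python
# Scale check: 8 zettabytes vs. ~10 TB per Library of Congress (assumed).
ZB = 10**21   # zettabyte, decimal
TB = 10**12   # terabyte, decimal

digital_universe = 8 * ZB        # predicted size of the digital universe by 2015
library_of_congress = 10 * TB    # rough ballpark for LoC print data (assumption)

multiples = digital_universe / library_of_congress
print(f"{multiples:,.0f} Libraries of Congress")  # 800,000,000
```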

Today we find ourselves, once again, at a tipping point of computer storage, where the challenge is no longer the performance of computing but rather the storing and retrieving of the information it generates and shares.

Through our acquisition of Texas Memory Systems last fall, we have systems that can store almost 24 terabytes in a unit the size of a pizza box and that provide access to data 100 times faster than mechanical storage. If we were to stack 42 such pizza boxes in a rack, it would provide 1 petabyte of storage, which is more than any single operational application requires.
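The rack arithmetic above is easy to verify (decimal units assumed):

```python
# 42 "pizza box" units of ~24 TB each in one rack.
TB_PER_UNIT = 24
UNITS_PER_RACK = 42

rack_tb = TB_PER_UNIT * UNITS_PER_RACK
print(f"{rack_tb} TB = {rack_tb / 1000:.3f} PB")  # 1008 TB = 1.008 PB
```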
The industry is moving rapidly in the direction of all-flash storage for operational information. Such systems will not only help organizations respond to, and exploit, the challenges of Big Data today and tomorrow, but will once again change the future of computing, and possibly the world along with it.
Ambuj Goyal is general manager of IBM System Storage & Networking.