Reverse-Engineering

 

Whether it's rebuilding a car engine or diagramming a sentence, people can learn about many things simply by taking them apart and putting them back together again. That, in a nutshell, is the concept behind reverse-engineering—breaking something down in order to understand it, build a copy or improve it.

 

A process that was originally applied only to hardware, reverse-engineering is now applied to software, databases and even human DNA. Reverse-engineering is especially important with computer hardware and software. Programs are written in a language, say C++ or Java, that's understandable by other programmers. But to run on a computer, they have to be translated by another program, called a compiler, into the ones and zeros of machine language. Compiled code is incomprehensible to most programmers, but there are ways to convert machine code back to a more human-friendly format, including a software tool called a decompiler.
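To make the compile/decompile round trip concrete, here is a minimal sketch, assuming the Capstone disassembly library is installed; it turns a short, made-up x86-64 machine-code fragment back into readable mnemonics, which is the first step a decompiler builds on before reconstructing C-like source.

```python
# Minimal sketch: recovering human-readable mnemonics from raw machine code
# with Capstone (pip install capstone). The byte string is an illustrative
# x86-64 fragment (mov eax, 7 / add eax, 3 / ret), not taken from any real program.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

code = b"\xb8\x07\x00\x00\x00\x83\xc0\x03\xc3"
md = Cs(CS_ARCH_X86, CS_MODE_64)

for insn in md.disasm(code, 0x1000):          # 0x1000 is an assumed load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

A full decompiler goes one step further, grouping instructions like these back into loops, conditionals and function calls.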

 

Reverse-engineering is used for many purposes: as a learning tool; as a way to make new, compatible products that are cheaper than what's currently on the market; for making software interoperate more effectively or to bridge data between different operating systems or databases; and to uncover the undocumented features of commercial products.

 

A famous example of reverse-engineering involves San Jose-based Phoenix Technologies Ltd., which in the mid-1980s wanted to produce a BIOS for PCs that would be compatible with the IBM PC's proprietary BIOS. (A BIOS is a program stored in firmware that's run when a PC starts up; see Technology QuickStudy, June 25.)

 

To protect against charges of having simply (and illegally) copied IBM's BIOS, Phoenix reverse-engineered it using what's called a "clean room," or "Chinese wall," approach. First, a team of engineers studied the IBM BIOS—about 8KB of code—and described everything it did as completely as possible without using or referencing any actual code. Then Phoenix brought in a second team of programmers who had no prior knowledge of the IBM BIOS and had never seen its code. Working only from the first team's functional specifications, the second team wrote a new BIOS that operated as specified.

 

The resulting Phoenix BIOS was different from the IBM code, but for all intents and purposes, it operated identically. Using the clean-room approach, even if some sections of code did happen to be identical, there was no copyright infringement. Phoenix began selling its BIOS to companies that then used it to create the first IBM-compatible PCs.

 

Other companies, such as Cyrix Corp. and Advanced Micro Devices Inc., have successfully reverse-engineered Intel Corp. microprocessors to make less-expensive Intel-compatible chips.


 

Few operating systems have been reverse-engineered. With their millions of lines of code—compared with the roughly 32KB of modern BIOSs—reverse-engineering them would be an expensive option.


 

But applications are ripe for reverse-engineering, since few software developers publish their source code. Technically, an application programming interface (API) should make it easy for programs to work together, but experts say most APIs are so poorly written that third-party software makers have little choice but to reverse-engineer the programs with which they want their software to work, just to ensure compatibility.

 

Ethical Angles

 

Reverse-engineering can also expose security flaws and questionable privacy practices. For instance, reverse-engineering of Dallas-based Digital: Convergence Corp.'s CueCat scanning device revealed that each reader has a unique serial number that allows the device's maker to marry scanned codes with user registration data and thus track each user's habits in great detail—a previously unpublicized feature.

 

Recent legal moves backed by many large software and hardware makers, as well as the entertainment industry, are eroding companies' ability to do reverse-engineering.


 

 

"Reverse-engineering is legal, but there are two main areas in which we're seeing threats to reverse-engineering," says Jennifer Granick, director of the law and technology clinic at Stanford Law School in Palo Alto, Calif. One threat, as yet untested in the courts, comes from shrink-wrap licenses that explicitly prohibit anyone who opens or uses the software from reverse-engineering it, she says.

 

The other threat is from the Digital Millennium Copyright Act (DMCA), which prohibits the creation or dissemination of tools or information that could be used to break technological safeguards that protect software from being copied. Last July, on the basis of this law, San Jose-based Adobe Systems Inc. asked the FBI to arrest Dmitry Sklyarov, a Russian programmer, when he was in the U.S. for a conference. Sklyarov had worked on software that cracked Adobe's e-book file encryption.

 

The fact is, even above-board reverse-engineering often requires breaking such safeguards, and the DMCA does allow reverse-engineering for compatibility purposes.

 

"But you're not allowed to see if the software does what it's supposed to do," says Granick, nor can you look at it for purposes of scientific inquiry. She offers an analogy: "You have a car, but you're not allowed to open the hood."


 

The Clean-Room Approach To Reverse-Engineering

One person or group takes a device apart and describes what it does in as much detail as possible, at a higher level of abstraction than the specific code. That description is then given to another group or person who has absolutely no knowledge of the specific device in question. This second party then builds a new device based on the description. The end result is a new device that works identically to the original but was created without any possibility of specifically copying the original.
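As a toy illustration of that split (not Phoenix's actual process or data), the sketch below has a "first team" publish only a behavioral specification of an imaginary checksum routine, and a "second team" write fresh code that merely has to satisfy it.

```python
# Toy clean-room illustration. The SPEC table stands in for the first team's
# functional description; everything here is invented for illustration.

# First team: observed input/output behavior, no original code included.
SPEC = [
    (b"", 0),
    (b"A", 65),
    (b"AB", 131),        # observed: sum of the bytes, modulo 256
    (b"\xff\x02", 1),
]

# Second team: an independent implementation written only from SPEC.
def checksum(data: bytes) -> int:
    return sum(data) % 256

# Verify the new implementation reproduces the specified behavior.
for inp, expected in SPEC:
    assert checksum(inp) == expected, (inp, expected)
print("re-implementation matches the functional specification")
```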

How to crack open some computer chips and take your own die shots

 

By Sebastian Anthony on November 21, 2012 at 11:43 am

 

22nm silicon die and wafer (Intel, Knights Ferry)

 

If you read ExtremeTech regularly, you will have noticed that we love die shots — close-up photographs of the transistors, wiring, and other circuitry at the core of every digital computer chip. How are these beautiful photographs taken, though? Well, I’m glad you asked, because you’re about to find out.

 

First, a little background. The core of a computer chip is called a die because hundreds of individual dies are diced from a large wafer of silicon. These individual dies are then packaged, both for protection, and to provide an interface between the die and the logic board. At this point, the die becomes an integrated circuit (IC). This packaging is usually a dense plastic or ceramic.

 

Now, there are two ways of photographing a chip die: You either take a photo before the die is packaged, or you have to remove the packaging. The first route is obviously preferable, but it’s only available to the chip’s manufacturer. These die shots are usually very posed, often with dramatic lighting. Most of the die shots that you see on ExtremeTech are of this variety. The second method is usually carried out by companies that specialize in reverse engineering, such as Chipworks. Such reverse engineering might be for promotional reasons (as with the Apple A6 SoC), but it’s usually because another company (say, AMD) is paying Chipworks to reverse engineer the die of Intel’s latest CPU.

 

According to ZeptoBars, a Russian reverse engineering company, this process doesn’t even require special, expensive equipment — you can remove the packaging and take some pretty die shots yourself. It’s worth noting that this process is very dangerous, though — so, please, don’t try this at home.

 

A bunch of computer chips, in a bath of sulfuric acid

Sulfuric acid

 

In essence, removing the packaging requires just one step — but it’s one heck of a step. You need to place your computer chips in a glass pot, pour in enough concentrated sulfuric acid to cover the chips, and then heat the container until the acid boils (337C, 638F). After 30-40 minutes, the plastic packaging is “burned” (dehydrated) away, leaving carbon. Larger packages may need a few trips to the sulfuric acid bath to burn away all of the plastic. It isn’t clear if this same process works on ceramic packaging (ceramics are generally quite resistant to corrosion, but concentrated sulfuric is about as corrosive as it gets).

 

If the first process leaves chunks of carbon attached to the dies, a quick dunk in a hot bath of concentrated nitric acid will dislodge them. What’s left looks something like this:

 

Some computer chips, after they've had their packaging removed with sulfuric acid

 

Then, all that’s left to do is to take some actual photos of the exposed dies. ZeptoBars doesn’t explain its photographic process, but the company probably uses a high-resolution digital camera with a macro lens. There might be some reversed-lens macro magic going on, too. In the case of a professional reverse engineering company such as Chipworks, a mix of photography and microscopy will be used.

 

Updated: We contacted ZeptoBars, and they in fact use a metallographic microscope, with a digital camera attachment.

Atmel ATmega8, an 8-bit microcontroller, made on a 500nm process node

STMicroelectronics microcontroller, based on an ARM Cortex-M3 core

MIFARE RFID chip die shot

 

For more beautiful die photos, hit up ZeptoBars, the IC Die Photography blog or CPU-World.

Intel’s 14nm Broadwell chip reverse engineered, reveals impressive FinFETs, 13-layer design

 

By Joel Hruska on October 30, 2014 at 11:32 am

 

Intel Core M/Broadwell-Y chip

 

When Intel announced the details on its 14nm process last year, it raised eyebrows in some circles by claiming some extremely aggressive scaling figures. Put simply, Intel stated that it would deliver a better 14nm process with superior characteristics, die size, and overall efficiency than any competitive product TSMC, its largest foundry competitor, would release on 20nm. This predictably kicked off a PR blizzard between the two companies.

 

Intel stated that it would bring 14nm in with substantial scaling in transistor fin pitch, transistor gate pitch, and interconnect pitch, with a further significant reduction in SRAM scaling. Now, independent analysis and reverse engineering from Chipworks has confirmed that Intel did indeed deliver on its technological promises. Gate pitch has been measured at ~70nm, fin pitch at ~42nm, and a more complex 13-layer metal design. Intel had previously stuck with nine-layer designs before stepping up to 11 for its Bay Trail SoC.


 

The FinFET transistors of a 14nm Broadwell chip, as seen from above in plan view. [Image credit: Chipworks]


 

Image courtesy of RealWorldTech. As chip designs shrink, metal layers have become more complicated

 

Metal layers inside a chip are used to connect the various features and areas of the chip. As chips have gotten smaller, it has become increasingly difficult to route wires in ways that don't negate the increased performance of the transistors themselves. Intel's decision to step up to a 13-layer design may be partly responsible for Broadwell's difficulties; the more metal layers you have to connect, the more difficult it is to design the chip efficiently.

 

The one potential slip that Chipworks notes is that while Intel claimed a 52nm interconnect pitch, they measured 54nm, but they also say that this is within the margin of measurement error, and that Intel may have simply measured from a different point of the die. They also confirm that Intel hit its SRAM cell target size of 0.058 µm².

A 14nm Broadwell chip, side-on, showing all 13 layers

Another shot of the fins of the 14nm Broadwell FinFET transistors

What does this mean for Broadwell?

 

So, what’s the big picture mean for Intel’s hardware? It means that I’m more inclined to think that the problems of the Lenovo Yoga 3 Pro are either caused by Lenovo’s design decisions or by power management software. OS level drivers could also be an issue. Accurately hitting its process node targets doesn’t necessarily say anything about the underlying chip — Broadwell might still use more power than Intel projected, for example, or it might not reach target frequencies. It might hit all these metrics but have trouble with yields.

 

At the very least, this data suggests that Intel was playing it straight when it declared its 14nm technology would be a huge step forward and match historic scaling goals. Whether Intel can parlay those advantages into improving its cost structure and wafer costs is still a very open question. With 450mm wafers on hold and EUV still uncertain, the rising cost of each additional node could still poison any semiconductor manufacturer's attempt to push to smaller process technologies; it's just not clear when that will happen.

 

Here’s what I suspect it means, strictly speaking for myself: Broadwell may well push down into power envelopes that compete with “little core” products, but the user experience people get will be very dependent on what kind of design choices the OEM makes. An improperly-cooled Broadwell may indeed feel like an Atom. A well-cooled design should be quite a bit stronger. Ultimately, however, Broadwell doesn’t break the laws of physics — and the laws of physics dictate rather strongly that there’s a heat cost for every degree of computation you perform. At a certain point, Broadwell’s “big core” scale-down and Atom’s “little-core” scale up are going to meet and match each other. Reverse Engineering of Chips


 

September 16, 2012 by Nakul

 

Author: Nakul Rao I

 

 

 

We have all studied VLSI, or at least we will. So we know how chips are made and how integrated circuits are constructed. But there is now a fast-emerging field: reverse engineering integrated chips. Basically it's the same as opening up your old toys to find out exactly what is inside, how it works, and how it compares to others. The same idea is applied to chips, but the entire process is much more complicated.

 

The first question that may pop into your mind is why anyone would do it. There are several answers to that question.

 

The most important reason is cryptography. A lot of the hardware around you is responsible for the safekeeping of information. Mobile phones, smart cards, RFID tags, digital set-top boxes and even car keys involve some cryptography at the circuit level. Most manufacturers do not use published standard algorithms such as DES, AES or RSA in these devices, since those are considered too complex and expensive to implement in silicon. Instead, manufacturers use their own proprietary algorithms, which are often weak; they rely on the fact that the algorithm is unpublished and that, without knowing it, the data is hard to decrypt. Breaking such algorithms can be worth a lot of money. The cipher in Mifare cards, for example, is widely used in public access-control systems but was never published; when it was reverse-engineered, it turned out to be very weak. It is better for someone to point out to customers that the security mechanisms they trust are weak through the manufacturer's negligence than for them to find out the hard way. Consider the card-swipe locks on hotel doors: some of these can be cracked open with an Arduino, an 8-bit microcontroller board.
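A little arithmetic shows why an unpublished short-key cipher is fragile once the circuit is recovered: the Mifare Classic cipher (Crypto-1) uses a 48-bit key, versus 128 bits for AES. The sketch below compares the two key spaces under an assumed search rate of one billion keys per second (real Crypto-1 attacks exploit the cipher's structure and are far faster than brute force).

```python
# Back-of-the-envelope key-space comparison; the search rate is an assumption
# for illustration only.
RATE = 1_000_000_000                      # assumed keys tested per second

for name, bits in [("Crypto-1 (Mifare Classic)", 48), ("AES-128", 128)]:
    keys = 2 ** bits
    days = keys / RATE / 86400
    print(f"{name}: 2^{bits} keys, ~{days:.3g} days of brute force")
```

At that rate the 48-bit key space falls in a few days, while the 128-bit space remains utterly out of reach.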

 

Reverse engineering of chips is also vital in patent-infringement cases. How do you find out whether a rival company is using your patented technology? By reverse-engineering its chips. Several companies have cropped up to provide such services, the most notable being Flylogic and Chipworks.

 

So how are chips reverse-engineered? Anyone familiar with VLSI knows that chip designs form geometric patterns; the different logic cells and elements can be identified by their distinctive shapes. First all the packaging has to be removed, and then images are taken of the silicon layer by layer. The plastic and epoxy covering is removed using acetone and fuming nitric acid, which exposes the actual die. A chip can have many layers, up to 13 or 14, and images must be taken of each one. After a layer is photographed, the chip is carefully polished; polishing removes the last photographed layer and exposes the next. This continues until images of all the layers have been obtained. The images are then pieced together and the various cells are identified against standard-cell libraries, usually by software. Finally a complete picture of the chip, its functional blocks and its interconnections is obtained. The chip has been reverse-engineered.
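The cell-identification step can be pictured with a small sketch, assuming a stitched layer photograph and a reference image of one standard cell are already on disk (the file names and the 0.8 threshold are placeholders); production netlist-extraction tools are far more sophisticated, handling alignment, multiple layers and error correction.

```python
# Illustrative only: locating instances of a known standard cell in a polished
# layer image with OpenCV template matching.
import cv2
import numpy as np

layer = cv2.imread("layer3_stitched.png", cv2.IMREAD_GRAYSCALE)  # stitched die photo (placeholder)
cell = cv2.imread("nand2_reference.png", cv2.IMREAD_GRAYSCALE)   # reference cell image (placeholder)

scores = cv2.matchTemplate(layer, cell, cv2.TM_CCOEFF_NORMED)    # normalized correlation map
ys, xs = np.where(scores >= 0.8)                                 # keep strong matches only

h, w = cell.shape
for x, y in zip(xs, ys):
    print(f"possible NAND2 instance at ({x}, {y}), size {w}x{h}")
```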

 

If a chip can be reverse-engineered, that has a big impact on cryptography. If standard algorithms are not used, an attacker can look at the circuit and recover the algorithm. Even worse, he can often get at the data directly without worrying about the algorithm at all. Consider a modern security chip. All data stored in it is encrypted, say with a proprietary algorithm, but the ALU can operate only on plaintext. So the chip includes a decryption unit and an encryption unit: data is decrypted before being sent to the ALU, and the ALU's output is encrypted before being stored. That creates two weak links: the plaintext can be tapped on the path from the decryption unit into the ALU, or on the path from the ALU back into the encryption unit. This idea is widely used to crack such schemes.
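The weak link is easy to see in a toy software model (no real hardware or cipher here): storage holds ciphertext, but the ALU needs plaintext, so plaintext must cross an internal bus that a microprobe can observe.

```python
# Toy model of the decrypt-then-compute pipeline; the XOR "cipher" and the
# probe hook are purely illustrative.
SECRET_KEY = 0x5A

def decrypt(byte):            # stand-in for the chip's decryption unit
    return byte ^ SECRET_KEY

def bus(value, probe=None):   # internal bus between the decryption unit and the ALU
    if probe:                 # an attacker's probe attached to the bus wires
        probe(value)
    return value

stored = [b ^ SECRET_KEY for b in b"PIN=1234"]            # "encrypted" storage
leaked = []
for word in stored:
    plaintext = bus(decrypt(word), probe=leaked.append)   # the ALU receives plaintext
print(bytes(leaked))          # the probe saw: b'PIN=1234'
```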

 

Another problem is the storage of secret keys. The keys have to be stored somewhere on the chip, and anyone who can reverse-engineer the chip has access to them.

 

But does this mean that chip manufacturers have lost the war? Have the reverse engineers won? That is not exactly true. Chip manufacturers have come up with several methods to try and protect their chips from being reverse engineered.

 

Reverse engineering is done by removing each layer from the chip, so manufacturers put a mesh of wiring over the chip's top layer and design the chip so that it will not work unless that mesh forms a continuous path, i.e., the start and end points of the mesh must remain connected for the chip to function. This does not provide a lot of security, since the mesh layer can be removed and the two points shorted afterwards.
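The idea behind such an active mesh can be sketched as follows; the check is shown as Python pseudocode with hypothetical names, since real implementations live in firmware or dedicated logic and behave in vendor-specific ways.

```python
# Conceptual sketch of an "active shield" check: drive a pattern into the
# top-layer mesh, verify it returns intact, and erase keys if it does not.
def read_mesh_return(expected_pattern):
    # On real silicon this would read a sense line fed by the mesh; here we
    # simply simulate an intact mesh echoing the driven pattern.
    return expected_pattern

def check_active_shield(key_store):
    pattern = 0b10101100              # pattern driven into the mesh this cycle
    if read_mesh_return(pattern) != pattern:
        key_store.clear()             # tamper detected: zeroize stored keys
        raise RuntimeError("shield integrity failure")

keys = {"master": "00ff..."}          # illustrative key store
check_active_shield(keys)             # passes while the simulated mesh is intact
```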

 

Another important method is to use glue logic instead of using standard cell libraries. This makes identifying logic cells difficult but not impossible. Decreasing the feature size also helps since the cost of reverse engineering a chip goes up as feature size decreases.

 

Reverse engineering of chips is a vast field, and I have only tried to give a basic introduction here. There is still much to explore; you can find a lot more information and some amazing images at the following links.

 

www.chipworks.com/

 

www.flylogic.net/

 

www.flylogic.net/blog/

https://en.wikipedia.org/wiki/Reverse_engineering

Reverse engineering is applicable in the fields of mechanical engineering, electronic engineering, software engineering, chemical engineering,[2] and systems biology.[3]

Obsolescence. Integrated circuits are often designed on proprietary systems and built on production lines that become obsolete in only a few years. When systems using these parts can no longer be maintained (since the parts are no longer made), the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then redesign it using newer tools, using the understanding gained as a guide. Another obsolescence-driven problem that reverse engineering can solve is the need to support (maintain and supply for continuous operation) existing legacy devices that are no longer supported by their original equipment manufacturer (OEM). This problem is particularly critical in military operations.

Reverse engineering of software can be accomplished by various methods. The three main groups of software reverse engineering are

 

    Analysis through observation of information exchange, most prevalent in protocol reverse engineering, which involves using bus analyzers and packet sniffers, for example, to access a computer bus or network connection and reveal the traffic on it. The bus or network behavior can then be analyzed to produce a stand-alone implementation that mimics that behavior. This is especially useful for reverse engineering device drivers. Sometimes reverse engineering of embedded systems is greatly assisted by tools deliberately introduced by the manufacturer, such as JTAG ports or other debugging means. In Microsoft Windows, low-level debuggers such as SoftICE are popular. (A minimal packet-sniffing sketch follows this list.)

    Disassembly using a disassembler, meaning the raw machine language of the program is read and understood in its own terms, only with the aid of machine-language mnemonics. This works on any computer program but can take quite some time, especially for someone not used to machine code. The Interactive Disassembler is a particularly popular tool.

    Decompilation using a decompiler, a process that tries, with varying results, to recreate the source code in some high-level language for a program that is only available in machine code or bytecode.
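As a minimal sketch of the first approach above (observation of information exchange), the snippet below uses Scapy to capture and summarize live packets; the interface name and packet count are assumptions, and real protocol reverse engineering also involves replaying and fuzzing the captured exchanges.

```python
# Capture a handful of packets and print a one-line summary of each, so the
# protocol's structure can be studied. Requires root/administrator privileges.
from scapy.all import sniff

def show(pkt):
    print(pkt.summary())          # layers, addresses, ports for each packet

sniff(iface="eth0", count=20, prn=show)   # "eth0" and 20 are placeholder values
```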

Reverse engineering of integrated circuits/smart cards

Reverse engineering is an invasive and destructive way of analyzing a smart card. The attacker grinds away layer after layer of the smart card and takes pictures with an electron microscope. With this technique, it is possible to reveal the complete hardware and software of the smart card. The major problem for the attacker is to bring everything into the right order to find out how it all works. The makers of the card try to hide keys and operations by mixing up memory positions, for example by bus scrambling.[27][28] In some cases, it is even possible to attach a probe to measure voltages while the smart card is still operational. The makers of the card employ sensors to detect and prevent this attack.[29] This attack is not very common because it requires a large investment in effort and special equipment that is generally only available to large chip manufacturers. Furthermore, the payoff from this attack is low, since other security techniques, such as shadow accounts, are often employed. It is uncertain at this time whether attacks against chip-and-PIN cards to replicate encryption data and consequently crack PINs would provide a cost-effective attack on multifactor authentication.

Most of the top 200 and top 100 applications available to Android users via the Google Play store can be decompiled and reverse-engineered, a new report from mobile application security provider SEWORKS claims.

 

According to the security company, 85 percent of the top 200 most popular free applications in the official Android marketplace can be decompiled, and the same applies to 83 of the top 100 paid apps in the store. Since decompilation exposes an app's source code, these applications can become easy targets for exploits, including malware injection and ad fraud.

 

Most worrying is the fact that some of the most popular retail, messaging, photo sharing, and streaming service applications are included in the top 200 Android applications. Furthermore, puzzle, sandbox, and real-time strategy games are among these highly vulnerable applications, the security firm says.

 

The report reveals that 87 percent of the top 100 free game applications in Google Play are decompilable, a list that includes multiplayer, match-3, and real-time strategy installments, along with games based on recent popular movies. 80 percent of the top 100 non-game free applications in the storefront are also decompilable.

 

SEWORKS also found that 95 percent of the top 200 free apps in Google Play can be reverse engineered, and that the same applies to 82 percent of the top 100 paid apps in the marketplace. The researchers used a SaaS-based scanner service to find apps that can be reverse engineered through the apps' shared object library and DEX file with malicious tools available on the Internet.
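One structural reason such figures can be so high: an .apk is just a ZIP archive, so its Dalvik bytecode and native libraries can be pulled out with nothing but the standard library and then handed to decompilers such as jadx or apktool. A small sketch (the file name is a placeholder):

```python
# List and extract the decompilable pieces of an Android package.
import zipfile

with zipfile.ZipFile("app-release.apk") as apk:
    for name in apk.namelist():
        if name.endswith(".dex") or name.endswith(".so"):
            print(name)                       # e.g. classes.dex, lib/arm64-v8a/libnative.so
    apk.extract("classes.dex", path="out")    # DEX file ready for a decompiler
```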

"We are publicizing our findings to warn the industry at large of the dangers they are currently exposed to, which app developers can still fix through relatively simple precautions. Until these protections are put in place, over a billion Android owners are vulnerable due to these decompilable apps," said SEWORKS founder and CEO Min-Pyo Hong.

Can we make something like a chip reader, which can understand a chip design and generate a blueprint of it?

It would be possible to design a chip reader that could "reverse-engineer" certain types of chips (e.g., any design one could put in a 16V8, or most designs one could put in a 22V10), but in general there are far too many things a chip could do for one to be confident in any reverse-engineering effort done through probing alone. Even something like a 22V10 could behave one way until seven precise ten-bit addresses are clocked in, and then start behaving entirely differently. There'd be no way to probe all possible 70-bit address sequences, so one couldn't be sure no features were left out.

 

Yes. There are companies out there that specialize in this; it is done all the time, although it's more of an art than a science. Usually they use some wacky chemical and mechanical etching process to progressively strip off the layers of the chip (like the layers of a PCB), taking detailed photos of each layer. Normally these companies do it to help firms like TI and Intel figure out why their own chips are failing, but you can bet there are some illegal uses of this too.

 

Here's an interesting and relevant article that I just ran across: http://www.forbes.com/forbes/2005/0328/068.html

 

And another link: http://www.siliconinvestigations.com/ref/ref.htm

Another way to copy a chip design is to emulate its functionality using an FPGA. Many emulations of older chips like the Z80 and 6502 are available. Some students even produced their own version of an ARM device and made it available via the Web, but had to delete it when ARM threatened legal action.

You can only implement it in an FPGA after you have reverse-engineered it. The question is about this reverse engineering; the OP doesn't seem to have a datasheet.

 

While reverse engineering of old microchips is feasible with an optical microscope and manual polishing, the challenge is to strip off the layers cleanly. For instance, the picture above appears to be an older chip, and from the color changes in the background you can see that it has been polished to remove a layer. Typical deprocessing involves polishing with specialized polishing/lapping machines or wet chemical etching with more or less dangerous chemicals.

 

However, for more recent chips the process sizes are so small that you will need more sophisticated and expensive equipment such as a plasma etcher, a scanning electron microscope (SEM) or a focused ion beam (FIB). Due to the complexity, it is also no longer easy to extract the logic (i.e., the netlist) from the chip. Today, companies therefore use automated tools that process the SEM images of the chip layers to generate the netlist. The challenge here is to deprocess the chip so that deprocessing artifacts are avoided, as they would be problematic for any subsequent automated analysis.

 

There are some Youtube videos and conference talks on chip reverse engineering. For instance, in the video here you can see a smaller setup that people could use even at home: https://www.youtube.com/watch?v=r8Vq5NV4Ens

 

On the other hand, there are companies that can do this kind of work with more sophisticated and expensive equipment. In addition to those mentioned above, IOActive has a lab for this kind of work.

 

In the EU there are also companies. For instance, on the Trustworks website you can see a few pictures and some of the lab tools needed for this kind of work: https://www.trustworks.at/microchipsecurity. They also appear to have microchip reverse-engineering software tools; look specifically at their "Netlist Extraction and Analysis" section.

https://web.archive.org/web/20120228232431/http://www.flylogic.net/blog/
https://web.archive.org/web/20120228232028/http://www.flylogic.net/blog/?p=63
https://web.archive.org/web/20140219030037/http://www.flylogic.net/blog/?p=32

Are there open source projects that completely restore the inner circuitry of modern Intel CPUs? Is it simply possible, or are the circuits closed and/or protected by proprietary technology?

1) Smart Imaging Technologies can reverse to the gate level and output VHDL, but I don't know at what cost. 2) 3D X-ray tomography is a technique newly in vogue and might dispense with scraping layers off the chip. Googling it leads to a few papers: iacr.org/archive/ches2009/57470361/57470361.pdf | dforte.ece.ufl.edu/Domenic_files/ISTFA_2015_PCB%20RE-final.pdf

Not for modern CPUs. Not even for CPUs that are 10-15 years old.

 

In 2015 the reverse engineering of the Intel 8080 was finished, a CPU from 1974 (actually, the Soviet i8080 clone KR580VM80A from the 1980s was reversed). Both CPUs were made with a 6 μm feature size, so the die can be photographed with a cheap optical microscope.

 

The report in English is here: http://zeptobars.ru/en/read/KR580VM80A-intel-i8080-verilog-reverse-engineering

The project was coordinated here (in Russian): http://zx-pk.ru/printthread.php?t=23349&pp=40

The availability of detailed documentation (with block schemes), the low transistor count (4,758), coarse features, a single metal layer and readable dopant zones made the project possible.

 

Another successful project was the MOS 6502 from 1975, with a 5-16 μm feature size and about 3,500 transistors: http://www.visual6502.org/ (they have a big collection of chip photos, but most are not reversed to schematics).

 

One of the KR580VM80A reversers reported a project (in Russian) on reversing the MIPS R3051-based PlayStation 1 CPU, made with a 0.8 μm (800 nm) feature size in 1995. The project site is http://psxdev.ru/. This CPU has 250 thousand transistors and three metal layers. Two years after the start, good optical photos of the chip and all its layers had been made (all metals, silicon and dopant) and many standard cells had been identified, but only the multiplier block had been more or less fully reversed.

 

So a 0.25-million-transistor device is already out of reach for amateurs, while modern Intel devices have transistor counts of roughly 50 million in the Pentium 3/4 (2000, around 130 nm), 50 million in Atom (2008, 45 nm), 200-400 million in Core 2 (2007, 65-45 nm) and more than 1,000 million in bigger chips like the Core i7 (2010, 32 nm).
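A crude scaling of those numbers shows the gap; it assumes (optimistically) that effort grows only linearly with transistor count.

```python
# Figures taken from the text above; linear scaling of effort is an assumption
# and reality is almost certainly worse.
r3051 = 250_000            # transistors in the PS1 CPU: ~2 years of volunteer work for partial results
core_i7 = 1_000_000_000    # "more than 1,000 million" transistors in a Core i7
factor = core_i7 / r3051
print(f"Core i7 is ~{factor:,.0f}x larger; naively 2 years x {factor:,.0f} = {2 * factor:,.0f} years at the same pace")
```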

 

    Is it simply possible, or are circuits closed and/or protected by proprietary technology?

 

The circuits and their "sources" (Verilog) are proprietary; the software used to convert them into the transistor pattern is proprietary (some of it Intel's, possibly some from other vendors). And there is no real chance of reading the schematics back from the die (the fabricated chip), because the features are too small to be visible in an optical microscope, and the dopant levels are too faint to be read out across the full chip even by a scanning electron microscope (SEM). There is simply too much information inside the chip (I consider modern photolithography tools the most advanced data-transfer tools made by mankind, with terabytes per second transferred from the photomask onto the wafer).

 

For example, the paper Stealthy Dopant-Level Hardware Trojans says:

 

    Also, optical reverse-engineering does not usually allow to detect changes made to the dopant, especially in small technologies. A dedicated setup could eventually allow to identify the dopant polarity. However, doing so in a large design comprising millions of transistors implemented with small technologies seems impractical ...

 

There are several companies able to reverse-engineer some parts of modern chips, but Intel's CPUs are too big to be fully reversed (the process would have an impractical cost in money and in man- and computer-hours). For example, the reversing leader Chipworks (www.chipworks.com) lists some examples:

 

    Examples of our experience and capabilities

 

    Stand-alone and embedded memory

    Field programmable gate array (FPGA) and other gate arrays

    Analog-to-digital converters (ADC) and digital-to-analog converters (DAC)

    PLLs and clock generators

    Wired and wireless devices, including transceivers and mixers

    Advanced CMOS microprocessors, graphic chips, DSPs, and microcontrollers

    RFIDs and smartcard chips

    Power semiconductor devices including regulators and high/low power designs

 

But most of their projects were reversals of small chips (made with less than the most advanced technology) or of parts of chips. They are able to open a chip and make some nice SEM photos of a cross-section, or optical photos of the full die's top metal or silicon layer at very coarse resolution (good for measuring the area of the chip or its blocks, but not enough to reverse the logic from).

 

They sell photos and reports about Intel chips, for example for the Core i5-660:

    200 USD for a die photo (top metal?)

    2,500 USD for an M1 (lower metal) photo

    11,000 USD for a report on the package used

    15,000-15,500 USD for a Layout and Design (DfM) Analysis or Transistor Characterization

    24,500 USD for a Structural Analysis Report

 

Some people think it would be much cheaper to redevelop a modern CPU than to try to reverse-engineer it from the chip. And, possibly, some federal agencies may plant agents inside the company to try to steal the CPU sources; but I suspect that even if the sources reached an agent's hands, they could not be taken outside the buildings.

answered Feb 17 '15 at 5:45 by osgx

 

OpenCores is a project aimed at rebuilding the designs of common integrated circuits under open-source licenses.

 

One of the sub-projects is dedicated to rebuilding the i386 architecture; it is called Zet.

 

I don't know how they rebuilt the instruction set; there must be a bit of reverse engineering involved. But the specifications published by Intel should be enough (of course, this design will be far from as efficient as Intel's own CPUs; Intel has made a lot of progress over the years, and it would be difficult to get the same result in the end).

 

But I don't know if this question is really related to reverse engineering (then again, I don't know where else you should have asked it, so...).

http://www.righto.com/2014/05/reverse-engineering-tl431-most-common.html (or even reversing an Aibo!)

Sony used to sell an outboard SBM A/D converter that seemed like it was just an expensive little black box. You could then feed this digitally into a standard (non-SBM) device. Anyone know if this little thing is still around? If I had to guess, the model was an SBM-1.

 

 

Recorders

Studer A820

Ampex ATR 104

Headstacks for 1/4” full track, quarter track, NAB & DIN 2 track; 1/2” 2, 3, and 4 track playback. 1” 2 Track on request. Choice of replay electronics, including Plangent Processes

Tascam & Nakamichi cassette decks

Sony PCM-7030 and Tascam DA-45 DAT recorders

Tascam DA-98 DSD recorder, DA-88 compatible

Sony PCM-1630 system

Sony PCM-601 F1 format system

Pro-Ject Xtension 10 Evolution turntable

 

Equalizers

Sontec MES-432C/6

Prismsound Maselec MEA-2

API 5500 and 550m

Pultec EQM-1A3

Weiss EQ-1 MkIII

Z-Sys ZQ-6 surround

Z-Sys ZQ-2 stereo

 

Dynamics

SSL Multichannel compressor

API 2500 compressor

Maselec MPL-2 peak/high frequency limiter/de-esser

Maselec MLA-4 multiband compressor/expander

Fairman TMC compressor

Weiss DS-1 MKII

Waves L2 limiter

 

Maselec MTC-6 analog transfer & mastering console

Z-Systems 32×32 AES Routing Switcher

 

Miscellaneous

TC 6000 w/ Surround Mastering, GML EQ, Unwrap, & Backdrop plug-ins

Lexicon 300 reverb & effects processor

Sony DRE-777 convolution reverb

Weiss SFC-2 sample rate converter

Z-Sys SRC3 sample rate converter

RME-DD192 digital format converter

Z-Sys Z-K6 surround processor

Dolby, DTS & Circle Sound surround processors

 

Workstations & Converters

Workstations

Sonic Studio soundBlade w/ all options

Sony Sonoma w/ SBM-direct DSD>PCM converter, SACD authoring

ProTools 10 & 11

Wavelab

Plugins include

iZotope RX6 audio restoration suite

iZotope Ozone 8

Sonnox Oxford Limiter & Inflator

Wholegrain Systems Quartet & Trio

Weiss Saracon DSD/PCM sample rate conversion

 


 

Converters

Pacific Microsonics HDCD Model 2

Pacific Microsonics HDCD Model 1

Prismsound ADA-8XR 8 channel PCM & DSD converter

Ayre AD9 PCM & DSD A/D converter

Meitner 8 channel DSD D/A converter

Mytek 8×192 8 channel PCM & DSD converter

 

Monitoring

Dunlavy SC-V loudspeakers (FL and FR)

Dunlavy SC-IV (C, LS, RS)

Dual Paradigm Servo-15 subwoofers

Ayre Acoustics amplification

Grace 920 headphone amplifier

Grace 905 monitor controller

Mytek and Sony AES metering

Airshow Mastering

Worse than SMC

 

Playback Recorders

Studer A820

Ampex ATR 104

(Headstacks for 1/4” full track, quarter track, NAB & DIN 2 track; 1/2” 2, 3, and 4 track playback. 1” 2 Track on request. Choice of replay electronics, including Plangent Processes)

Tascam & Nakamichi cassette decks

Sony PCM-7030 and Tascam DA-45 DAT recorders

Tascam DA-98 DSD recorder, DA-88 compatible

Sony PCM-1630 system

Sony PCM-601 F1 format system

Pro-Ject Xtension 10 Evolution turntable

 

Monitoring

NHT M-20 monitors and Grado headphones

 

Restoration Tools

Sonic Studio soundBlade w/ NoNoise and Spectral Repair

iZotope RX6 audio restoration suite

iZotope Ozone 8

 

Converters

Pacific Microsonics HDCD Model 2

Pacific Microsonics HDCD Model 1

Prismsound ADA-8XR 8 channel PCM & DSD converter

Ayre AD9 PCM & DSD A/D converter

Meitner 8 channel PCM/DSD converter

Mytek Brooklyn MQA D/A

Mytek 8×192 8 channel PCM/DSD converter

Lavry dB-4496

The sound changes subtly, but the difference is a matter of "like/dislike" rather than "good/bad."

The effect might be clearer at lower bitrates, but I haven't bothered to run that experiment.

If you ask whether high-bitrate compressed audio reaches parity with (or exceeds) CD quality, I don't feel it gets that far.

There is a difference in the piercing ultra-high frequencies, but I think it's at a level you won't notice unless you listen on fairly good equipment (amp + cables + speakers).

The default setting is "Auto," so for peace of mind I leave it there, but so far it feels like a feature I could live without.

On a Sony NW-A16, if I rip CDs to AAC and upscale with DSEE HX, the difference is clearly audible. The sound becomes more dynamic and gives the illusion of greater spaciousness.

 

I use earphones that can play hi-res audio.

However, compared with the hi-res version of the same track the sound quality is clearly worse, and the more I listened the more grating it became, even compared with the CD, so I stopped upscaling with DSEE HX.

I don't know about other methods. There is no dramatic change, but when I compared carefully, I did feel a clear difference.

I use ordinary earphones that aren't hi-res rated, but MP3s upconverted with DSEE HX seem to gain spaciousness and clarity in the fine details. On top of that, the overall impression is that the edges are smoothed off and the sound becomes milder, softer and gentler. On the other hand, because the separation is better and things are clearer, the deep bass drops out and the sense of fullness is reduced.

So I tried a variety of tracks, and I thought DSEE HX suits classical, jazz and acoustic music, but with metal and electronic/programmed tracks the impact felt thin. Too polite; there's little sense of a band playing hard right next to you. It isn't fatiguing to listen to, though.

The above is just my intuition, one person's impression, but a musician has said something similar ("Hi-res feels like it has a higher ceiling; reverb tails and imaging are easier to make out. A lower ceiling gives a stronger compressed feel, so for rock and similar material, depending on the track, non-hi-res audio at 44.1 kHz may be fine"), so I think it's roughly on the mark.

Since you're using high-performance hi-res earphones, the DSEE HX effect may actually be harder to notice when the playback chain and the recording are already good. There were also reviews saying that the automatic headphone-optimization feature on the Xperia Z4/Z5 is easier to appreciate with cheap earphones plugged in. And people's hearing differs; MP3s at 192-256 kbps or above are said to be nearly indistinguishable. You can't generalize here.

With Sony's stock music app, the bass is restrained and vocals sound muffled when the equalizer is flat; it just lacks energy. I'd move the equalizer to whatever positions suit you and save that as a preset. ClearAudio+ seems to divide opinion, but I'm in the "off" camp; it makes things feel stretched out and thin.

 

I was curious about hi-res earphones and tried various models at an electronics store, and every single one sounds different. There is no "this one is correct!" However expensive the model, if it doesn't suit you it doesn't suit you. In the end I concluded that, hi-res rated or not, you should just pick whatever produces the sound you like.

It's better to choose neither, because leaving them off is closest to the original sound.

DSEE HX arbitrarily interpolates the low and high ends of CD or compressed audio to make it "equivalent" to a hi-res source. For a moment the amount of data grows to hi-res levels, so it becomes hi-res "equivalent," but because data that never existed is being interpolated, it moves away from the original sound.

ClearAudio+ is, simply put, Sony's recommended equalizer preset. Since it alters the track, it too moves away from the original sound. It's best not to change any settings.

When I use DSEE HX I notice sounds I hadn't noticed before, but when I switch it off and listen again, I realize those sounds were audible all along.

 

Honestly, I can't tell the difference. DSEE HX takes lossy-compressed audio and CD (lossless) audio, bit-extends and upscales it to the equivalent of 24-bit/192 kHz and corrects the high frequencies. My biggest question about this pseudo-hi-res approach is whether data removed by lossy compression can really be interpolated. Hearing sounds you couldn't hear before is a hallmark of hi-res sources, but with lossy compression the sounds masked from hearing have been deleted, so I don't see how they could be restored. In fact, JVC's K2HD upconverts CD audio to hi-res equivalent but does not interpolate data for lossy sources. So it seems entirely natural that with DSEE HX, too, bit extension and upscaling of lossy sources restores almost nothing, and the only effect you perceive is high-frequency correction with added noise.

With CDs actually ripped to FLAC, ALAC, WAV and so on, the effect can be confirmed. It isn't really to my taste, but the sound audibly changes. It becomes somewhat bright in the highs, which I don't like, but on well-recorded CDs each individual sound becomes clearer and fine detail is rendered solidly. Still, data interpolation by bit extension and upscaling doesn't add anything that wasn't there; it only makes things that were hard to pick out easier to hear.

The same goes for CD versus hi-res sources: if the CD recording isn't good, buying the hi-res version still won't sound good. I recommend ripping other well-recorded CDs losslessly and comparing.

Isn't the low end corrected as well? There were places where the bass also seemed easier to hear; is that my imagination?

The bass became easier to hear because data is interpolated by bit extension and upscaling. With lossy compression, sound above 16 kHz has been cut, so noise is added to correct the highs; with CD (lossless) sources, content above 22.05 kHz is added. The bass isn't added; rather, the resolution presumably improves because pseudo-hi-res processing (bit extension and upscaling) interpolates data. Try comparing the same track (same performers, same singer) in hi-res and on CD.

DSEE HX isn't really upscaling. Physically, 24-bit is 256 times 16-bit, so filling in information that isn't there is impossible. And strictly speaking, Sony doesn't claim that DSEE HX upscales; it says "hi-res-like." Read it carefully.

 

In the end, I don't use DSEE HX.

To me, DSEE HX makes the sound feel harder. Because the high range is being filled in, it sounds shrill, and honestly not very natural; it lacks the softness of real hi-res. Among hi-res "restoration" schemes, Victor's K2HD gets closer, but it still can't match the real thing.

I'm using an A10. DSEE HX really does improve the sound astonishingly. If you switch it off and listen you can tell the difference; without it everything sounds thin and hollow. Once you've heard it with DSEE HX on, you can't go back! It's a great feature; please try it, I think you'll be impressed.

Is that so! Thank you! By the way, what earphones are you using?

MDR-EX750.

If you look at the manufacturer's site, DSEE HX "upscales CD and compressed audio to high-resolution audio equivalent to hi-res," so of course it should have an effect.

However, depending on the recording equipment and the character of the track, some material benefits noticeably while other material shows hardly any difference, as the article below makes clear.

http://av.watch.impress.co.jp/docs/series/dal/20131216_627811.html

 

Personally, if you're listening on the go, even with noise canceling you can't completely shut out ambient noise, so I doubt you'd hear a difference worth the extra battery drain of switching it on.

On my Walkman I even keep noise canceling off, because otherwise I miss my train stop (lol).

If I had to use one of them, it would be DSEE HX, I suppose.

 

I don't use ClearAudio+ because it changes the sound completely. That said, I basically dislike the sound becoming unnatural, so I normally don't use any of these effects at all.

I don't use either of them. ClearAudio+ changes things considerably and sounds unnatural, and DSEE HX also feels off, so I dislike it.

DSEE isn't a technology that raises sound quality; it's a feature that processes compressed audio to sound hi-res-like. A hi-res source has more information, meaning finer detail and a thinner sound, so perhaps quieter sounds simply come across as more detailed? With DSEE on I hear a strange digital effect, especially in the treble; it feels as if the sound is being made to quiver.

When I checked, even MP3s showed output extending above 32 kHz... DSEE HX is frightening... like my realtek chip noise!
