Visual C++ Team Blog

Developing Windows Applications in C++ (Articles Series)


We have just published “Developing Windows Applications in C++”, an article series created by C++ Most Valuable Professional Kate Gregory. This material is aimed at C++ developers who want to learn how to write Windows applications. It won’t teach you C++, or how to use Windows as an end user. But if you already know C++ and Windows and want to write applications, this is the right place to start. It’s divided into six chapters.

 

  1. The tools you need
    In this chapter you will learn about the Windows SDK and how to get it. Visual Studio will be introduced and the different versions explained. All the demos and samples in this material were developed using Visual Studio 2010 Express, which is available at no charge.
  2. Windows basics
    Here you will see how Windows works under the hood. You’ll meet words like “message” and “handle” that are everywhere for Windows programmers. Windows terminology can be different from what you’re used to, so this chapter clarifies a lot of it. You’ll also see the various ways that Windows interacts with your application and provides functionality for you.
  3. A simple Windows application
    In this chapter, you’ll meet the simplest-possible Windows application and understand its structure, and how it works. These concepts will recur in every Windows application you write. You’ll also see the starter application that is generated by Visual Studio, and how it differs from the simplest-possible application.
  4. Typical Windows tasks
    This chapter starts by transforming the starter application from chapter 3 from a C-style collection of functions into a more object-oriented C++ application. Then it uses Direct2D to paint a simple UI. The sample is further refined by adding reactions to mouse and keyboard events, and then a control is added. You can also see how to show a message box to your users. This gives you all the building blocks for a user interface.
  5. Working with COM
    A lot of Windows functionality is provided through COM, the Component Object Model. This chapter focuses on consuming functionality offered by Windows through COM. It covers COM concepts, demonstrates calling the Text-to-Speech capabilities of Windows through COM, and explains many of the coding patterns and conventions you will see in almost every application that uses COM.
  6. Taking the next steps
    This chapter calls out nine other Windows-related technologies that might interest you, and provides links to starting points you can use to explore those. With the foundation provided by the first six chapters, you can start to incorporate many other technologies into your own Windows applications.

GoingNative: a New Channel 9 Show Dedicated to Native Development


GoingNative is a new, monthly show on Channel 9 dedicated to native development and native developers, with an emphasis on modern C++. In our inaugural episode, we keep things light and easy as we introduce you to what we're doing, why we're doing it, and how it will go down.

The main goal of episode 0 is to introduce the cast of characters, including your hosts Charles Torre and Diego Dagum, and to present some ideas of how we think this show will be organized and executed. For example, Diego and Charles will typically construct the show, iterate through some code demos of varying complexity, converse with native developers from Microsoft and across the industry, and then destruct the show. 
In this first episode we do talk about and demo a few new C++ features (shared_ptr, lambdas, auto) and have a conversation with Ale Contenti - development manager of VC's front-end compiler, libraries, and IDE.

[You can play around with the demos in this episode by downloading the free VC++ Express IDE]


Table of Contents (click time code links to navigate player accordingly)

[00:09] Charles and Diego construct the show and talk about modern C++ (how 'bout that set, eh?)
[07:27] Diego demos shared_ptr
[10:01] Charles and Diego chat briefly about C++ lambdas
[10:32] Diego demos lambdas
[12:13] Charles and Diego chat briefly about C++ auto keyword (seen in the lambdas demo)
[13:30] Charles and Diego talk about the audience and how you can help us fly this plane
[15:32] Charles interviews Ale Contenti
[26:35] Charles and Diego destruct the show (it won't usually take this long)

 

Go native!

First Look at the New C++ IDE Productivity Features in Visual Studio 11



Hi! I am Sumit Kumar, a Program Manager on the Visual C++ team.

Today I will talk to you about some of the exciting new IDE functionality in the next version of Visual Studio that will make you, the C++ developer, more productive with your daily code focused tasks. In this blog post you will get a preview of the new features that help with code understanding and editing. There will be more blog posts talking about other cool new features.

 

 

Code Understanding enhancements

Semantic Colorization

Semantic Colorization helps you quickly scan the code and infer more semantic meaning through enhanced visual feedback in the editor. In addition to keywords, strings, and comments, other tokens such as types, enumerations, and macros are now colorized; parameters appear in italics; and so on. The screenshot below shows an example. Notice how the macros, types, function parameters, etc. pop out and make understanding code so much easier.

Semantic colorization

While there are only a few tokens that are colorized differently by default, around twenty different semantic tokens are exposed to the users as shown in the screenshot below.

Fonts and colors

You can customize your IDE to colorize these tokens differently. For example, you could choose to colorize local and global variables differently which could be a handy source understanding aid when the variables are identically named but defined in different scopes.

 

Reference Highlighting

Another great productivity feature that aids you in understanding code is Reference Highlighting. When you place your text cursor on a symbol, all the instances of that symbol in the file get highlighted. Only true references are highlighted – for example, two symbols with the same name in different scopes (say, local vs. global) will not be highlighted at the same time. You can use the Ctrl+Shift+Up and Ctrl+Shift+Down keys to move between the highlighted references. This means that you no longer have to invoke Find All References if you are simply looking for symbols within a file. The screenshot below shows how all the instances of the variable cxExtentMax inside the function body are highlighted when the cursor is placed on the one referenced in the call to max(). The variable with the same name defined outside the function scope is not highlighted.

Reference highlighting

 

New Solution Explorer

There are a number of tool windows needed for common everyday tasks – for example, Navigate To is used for searching symbols and files, Class View and Object Browser are used for inspecting the members of an object, Find All References is used for, well, finding references, Call Hierarchy is used for finding the calls to and from a function etc. Imagine being able to do all of these operations from a single tool window without having to switch context or sacrifice additional precious screen real-estate. The new Solution Explorer combines most of the functionality of these tool windows into one place, itself! Of course, the other tool windows will still be available in Visual Studio, but the goal of the new Solution Explorer is to significantly reduce the need to invoke them for the most common scenarios. A detailed description of all of the new functionality provided by the versatile new Solution Explorer is a separate blog topic in itself, but here is a sampling:

You can expand your files to see the fields, functions and types contained in the files and the members contained in the types.

New Solution Explorer

It allows you to search your entire solution all the way to the members of individual classes.

New Solution Explorer

You can navigate back and forward between different views of the Solution Explorer and can create multiple instances of Solution Explorer rooted at different nodes if needed. You can also scope the view to just a specific project or file or type.

New Solution Explorer

The view in the editor automatically syncs with the view in Solution Explorer. Clicking on a symbol node in Solution Explorer takes you to the definition of that symbol in the editor. You can also see relationships such as Calls To, Calls From, References, and Inheritance for functions and types from within Solution Explorer.

New Solution Explorer

New Solution Explorer

 

 

Code Editing enhancements

The second category of C++ features helps you with editing code faster.

Automatic Display of IntelliSense Member List

In Visual Studio 2010 and previous releases, the IntelliSense member list dropdown had to be explicitly invoked, either by pressing Ctrl+Space or Ctrl+J or by entering a scope resolution operator (::) or an element selection operator (. or ->). In the next version, Visual Studio automatically shows the member list dropdown as you type, without the need to explicitly invoke it.

Automatic Display of IntelliSense Member List

The automatic display of the member list is smart – it is not shown when it would not make sense; for example, while you are typing a declaration, the member list stays out of the way.

Automatic Display of IntelliSense Member List

 

Member List Filtering

Not only is the member list displayed automatically, it is also filtered as you type to show only the relevant members. So you can get a filtered result like the screenshot below just by typing two characters.

Member List Filtering     Member List Filtering

Notice that pb is not a prefix or even a substring of the members in the list. The filtering uses fuzzy logic to find the relevant members quickly. If you do not like the fuzzy filtering, you can change it to prefix-based or prefix-plus-camel-casing matching, or turn off the filtering completely.

 

Code Snippets

Code Snippets help you quickly insert boilerplate code with just a couple of keystrokes. Here’s how it works for a switch statement: as you start typing, the IntelliSense member list shows you the relevant code snippet, which can be selected by pressing Tab.

Code Snippets

Code Snippets

Then modify the expression in the switch statement or just press Enter and the entire skeleton of the switch statement is added for you; you only need to fill in the logic!

Code Snippets
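
For reference, here is roughly the kind of skeleton the switch snippet expands to once you press Enter (a sketch only; the exact placeholder text depends on the snippet definition):

```cpp
int main()
{
    int switch_on = 0;   // the snippet leaves a placeholder expression here

    // The inserted skeleton: you fill in the cases and the logic.
    switch (switch_on)
    {
    default:
        break;
    }
}
```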

In addition to the switch statement, there are other snippets for basic code constructs available to you – like if-else, for loop, etc. Each of the snippets saves you from unnecessary typing and lets you focus more on your logic, adding up to significant productivity gains over time!

Additionally, the code snippets feature is extensible, so you can create your own snippets, which is as simple as authoring an XML file and copying it to a certain location. You can also invoke code snippets from the context menu in the editor and either insert a snippet or surround a selection of code with a snippet (for example, with an #ifdef block).

 

 

Summary

Many of these code understanding and editing features were requested by you, and are squarely intended to make you more productive with C++ development. Your continued feedback will help us make these features better before they ship. Please note that the descriptions and screenshots are from our early internal builds. These features are still under development and could potentially change or not even be included in our final product. In addition to these new features, we have done a lot of work on the IDE but I will save those for future blog posts.

 

[Keep learning about new features for C++ developers in Visual Studio 11]

C9::GoingNative: Visual C++ Upcoming IDE Demos, a CRT Talk and More!!


Greetings! Charles Torre and I are back with the second episode of Channel 9's GoingNative – which is actually episode 1, considering that C++, as a C-like language, starts indexing from 0.

This time we follow up on a recent article that my colleague and friend Sumit Kumar (a Program Manager on the Visual C++ team) wrote last week about new IDE features and enhancements that we plan to ship in the next version. You’ll see me quickly searching for and finding project assets (source files, classes, etc.) with the new Solution Explorer. You will also see the enhanced colorization of source files and how much more comprehensible your code looks this way, as well as the more proactive IntelliSense (which uses fuzzy logic to quickly filter the list as you type the initials of method names like GetMaxHeight() or get_max_height() – whichever naming convention you use). Possibly best of all is the addition of long-requested code snippets to C++ development in Visual Studio.

Then we picked one of the many topics you submitted for us to cover and visited Mahmoud Saleh, a Visual C++ team engineer. He told us about the C Runtime Library (CRT) and the role it plays alongside the C/C++ compiler, the linker, and the underlying Win32 APIs. In most cases we associate the CRT with the implementation of printf() and similar functions, without realizing that even our main(int, char**) function – which we tend to believe is the entry point of our application – is actually handed control by this “invisible buddy” (the fact that we are mostly unaware of its omnipresence is part of its success: we just don’t need to worry about it). Likewise, the executable version of your app doesn’t stop working at the closing bracket (“}”) of your main(); Mahmoud tells us what happens after the end. He also talks about how the CRT takes care of memory (like static initializers), security (buffer overruns, etc.), and exception handling, among other duties. You’ll also review the different ways of linking the CRT into your application, with their pros and cons.
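
As a minimal illustration of that last point (not code from the episode), consider a global object: the CRT startup code runs its constructor before main() is entered, and the CRT shutdown code runs its destructor after main() returns, before the process finally exits:

```cpp
#include <cstdio>

struct Tracer {
    Tracer()  { std::puts("constructed before main() by the CRT startup code"); }
    ~Tracer() { std::puts("destroyed after main() returns, by the CRT shutdown code"); }
};

Tracer globalTracer;   // a static initializer the CRT takes care of

int main()
{
    std::puts("inside main()");
    return 0;   // control goes back to the CRT, which runs atexit handlers
                // and static destructors before the process exits
}
```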

In the last segment, Charles describes his experience at C++ and Beyond, the conference on C++ programming hosted by three luminaries: Scott Meyers (Effective C++, More Effective C++, and Effective STL), Andrei Alexandrescu (Modern C++ Design, coauthor of C++ Coding Standards, and The D Programming Language), and (our) Herb Sutter (Exceptional C++, More Exceptional C++, Exceptional C++ Style, and the aforementioned C++ Coding Standards). But he doesn’t provide just his impressions: he shows us what a few fellow attendees had to say about the event and the speakers.

[Watch episode 1 on Channel 9]

First Look at the New C++ IDE Productivity Features in Visual Studio 11 (Cont’d…)


Hi! I am Amit Mohindra, a Program Manager on the Visual C++ team. Earlier in the month my colleague Sumit Kumar talked about some of the exciting IDE features in the next version of Visual Studio. Today I will add to that list a few more features we have built into the next version of the IDE to help you be more productive.

 

 

Project Compatibility = No Upgrade (Yay!)

Over the years, one of the pain points for our customers has been the upgrade cost associated with a new release of Visual Studio. The cost of upgrading from Visual Studio 2008 to Visual Studio 2010 was especially steep for Visual C++ customers because of breaking compiler changes and a totally new project and build system based on MSBuild.

In Visual Studio 11 we have eliminated this cost for you by supporting project (asset) compatibility between Visual Studio 2010 and Visual Studio 11. What this means is that you will be able to open and work (build etc.) in Visual Studio 11 with your Visual Studio 2010 projects without needing to upgrade your project files.

The upgrade wizard is gone.

The upgrade wizard is gone

The upgrade wizard does not pop up when you load a Visual Studio 2010 project in Visual Studio 11. It just loads. You can build the project with the Visual Studio 2010 tools from inside Visual Studio 11 using the multi-targeting feature, which I will explain more below. This means that while you adapt to the new compiler and your third-party vendors provide you with binaries compatible with the Visual Studio 11 compiler, you can still leverage the new Visual Studio 11 IDE without disrupting your ship cycle. Just set the platform toolset property to v100 in the property pages (this requires Visual Studio 2010 to be installed side by side with Visual Studio 11). Since there is no upgrade while you are using the Visual Studio 2010 tools (v100) to build, you can continue to load the project/solution in Visual Studio 2010 as well.

Setting the platform toolset

You might be thinking: “Wait! If I have more than 200 native projects, I don’t want to change the platform toolset property for each project.”

We have solved that problem in Visual Studio 11 by providing a way for you to update your toolset right from the solution file. Right clicking on your solution file brings up a context menu from which you can choose to update the toolset for all your native projects.

Massive update of the toolset property

Don’t be scared by the word “Upgrade”; this option only updates the toolset property for each of your native projects in the solution to use the Visual Studio 11 build tools (v110). Note that we are actively working on improving the usability experience around this feature.

Note that this feature is not limited to solutions containing only Visual C++ projects; it also works for mixed solutions that contain, say, some Visual C++ projects and some C# projects. In the coming weeks you will hear more about this.

 

Support for Visual Studio Templates (VSTemplates)

Visual Studio 2010 and previous releases offered C++ developers a way of creating project templates using the old .vsz/.vsdir format. That format does not support publishing templates to the extension gallery for other customers to download and consume. Visual Studio 11 for C++ supports the “vstemplate” format for authoring your custom project and item templates, which lets any new template leverage the infrastructure for publishing templates online to the extension gallery.

You can either hand-author your custom “vstemplate” files using the schema or, if you are working on a project, use Visual Studio to export it to a template as shown below.

Authoring templates in Visual Studio

Just fill out the details for your template and click finish in the wizard.

Authoring templates in Visual Studio

Now go to File->New->Project in Visual Studio 11 and your template shows up.

Authoring templates in Visual Studio

Once you have a template you can also upload the template to the extension gallery by creating a VSIX. Instructions on how to create a VSIX are listed here.

You can learn more on VSTemplates here.

 

 

Summary

Both of the features above have been requested by you over the years and we are bringing these to you in the next release of Visual Studio. Please share your feedback regarding these features and we will strive to make them better. Please note that the descriptions and screenshots are from our early internal builds. These features are still under development and could potentially change or not even be included in our final product.

C++11 Features in Visual C++ 11


There's a new C++ Standard and a new version of Visual C++, and it's time to reveal what features from the former we're implementing in the latter!

Terminology notes: During its development, the new C++ Standard was (optimistically) referred to as C++0x.  It's finally being published in 2011, and it's now referred to as C++11.  (Even International Standards slip their release dates.)  The Final Draft International Standard is no longer publicly available.  It was immediately preceded by Working Paper N3242, which is fairly close in content.  (Most of the people who care about the differences are compiler/Standard Library devs who already have access to the FDIS.)  Eventually, I expect that the C++11 Standard will be available from ANSI, like C++03 is.

As for Visual C++, it has three different version numbers, for maximum fun.  There's the branded version (printed on the box), the internal version (displayed in Help About), and the compiler version (displayed by cl.exe and the _MSC_VER macro - this one is different because our C++ compiler predates the "Visual" in Visual C++).  For example:

VS 2005 == VC8 == _MSC_VER 1400
VS 2008 == VC9 == _MSC_VER 1500
VS 2010 == VC10 == _MSC_VER 1600

The final branding for the new version hasn't been announced yet; for now, I'm supposed to say "Visual C++ in Visual Studio 11 Developer Preview".  Internally, it's just VC11, and its _MSC_VER macro is 1700.  (That macro is of interest to people who want to target different major versions of VC and emit different code for them.)  I say VC10 and VC11 because they're nice and simple - the 11 in VC11 does not refer to a year.  (VS 2010 == VC10 was a confusing coincidence.)
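
For example, a header that needs to emit different code for different major compiler versions can test the macro directly (a small illustration using the values listed above):

```cpp
#if _MSC_VER >= 1700
    // Visual C++ in Visual Studio 11 Developer Preview (VC11) or later
#elif _MSC_VER >= 1600
    // Visual C++ 2010 (VC10)
#elif _MSC_VER >= 1500
    // Visual C++ 2008 (VC9)
#else
    // older compilers
#endif
```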

If you read C++0x Core Language Features In VC10: The Table last year, the following table will look familiar to you.  This time, I started with GCC's table again, but I reorganized it more extensively for increased accuracy and clarity (as many features went through significant revisions):

 

C++11 Core Language Features | VC10 | VC11
Rvalue references: v0.1, v1.0, v2.0, v2.1, v3.0 | v2.0 | v2.1*
ref-qualifiers | No | No
Non-static data member initializers | No | No
Variadic templates: v0.9, v1.0 | No | No
Initializer lists | No | No
static_assert | Yes | Yes
auto: v0.9, v1.0 | v1.0 | v1.0
Trailing return types | Yes | Yes
Lambdas: v0.9, v1.0, v1.1 | v1.0 | v1.1
decltype: v1.0, v1.1 | v1.0 | v1.1**
Right angle brackets | Yes | Yes
Default template arguments for function templates | No | No
Expression SFINAE | No | No
Alias templates | No | No
Extern templates | Yes | Yes
nullptr | Yes | Yes
Strongly typed enums | Partial | Yes
Forward declared enums | No | Yes
Attributes | No | No
constexpr | No | No
Alignment | TR1 | Partial
Delegating constructors | No | No
Inheriting constructors | No | No
Explicit conversion operators | No | No
char16_t and char32_t | No | No
Unicode string literals | No | No
Raw string literals | No | No
Universal character names in literals | No | No
User-defined literals | No | No
Standard-layout and trivial types | No | Yes
Defaulted and deleted functions | No | No
Extended friend declarations | Yes | Yes
Extended sizeof | No | No
Inline namespaces | No | No
Unrestricted unions | No | No
Local and unnamed types as template arguments | Yes | Yes
Range-based for-loop | No | No
override and final: v0.8, v0.9, v1.0 | Partial | Partial
Minimal GC support | Yes | Yes
noexcept | No | No

 

C++11 Core Language Features: Concurrency | VC10 | VC11
Reworded sequence points | N/A | N/A
Atomics | No | Yes
Strong compare and exchange | No | Yes
Bidirectional fences | No | Yes
Memory model | N/A | N/A
Data-dependency ordering | No | Yes
Data-dependency ordering: function annotation | No | No
exception_ptr | Yes | Yes
quick_exit and at_quick_exit | No | No
Atomics in signal handlers | No | No
Thread-local storage | Partial | Partial
Magic statics | No | No

 

C++11 Core Language Features: C99 | VC10 | VC11
__func__ | Partial | Partial
C99 preprocessor | Partial | Partial
long long | Yes | Yes
Extended integer types | N/A | N/A

 

Here's a quick guide to this table, but note that I can't explain everything from scratch without writing a whole book, so this assumes moderate familiarity with what's in C++11:

Rvalue references: N1610 "Clarification of Initialization of Class Objects by rvalues" was an early attempt to enable move semantics without rvalue references.  I'm calling it "rvalue references v0.1", as it's of historical interest only.  It was superseded by rvalue references v1.0, the original wording.  Rvalue references v2.0, which is what we shipped in VC10 RTM/SP1, prohibits rvalue references from binding to lvalues, fixing a major safety problem.  Rvalue references v2.1 refines this rule.  Consider vector<string>::push_back(), which has the overloads push_back(const string&) and push_back(string&&), and the call v.push_back("meow").  The expression "meow" is a string literal, and it is an lvalue.  (All other literals like 1729 are rvalues, but string literals are special because they're arrays.)  The rvalue references v2.0 rules looked at this and said, string&& can't bind to "meow" because "meow" is an lvalue, so push_back(const string&) is the only viable overload.  This would create a temporary std::string, copy it into the vector, then destroy the temporary std::string.  Yuck!  The rvalue references v2.1 rules recognize that binding string&& to "meow" would create a temporary std::string, and that temporary is an rvalue.  Therefore, both push_back(const string&) and push_back(string&&) are viable, and push_back(string&&) is preferred.  A temporary std::string is constructed, then moved into the vector.  This is more efficient, which is good!  (Yes, I'm ignoring the Small String Optimization here.)
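
In code, the scenario described above is simply this (a sketch; the interesting part is which push_back overload the compiler selects):

```cpp
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> v;

    // "meow" is a string literal and therefore an lvalue. Under the v2.0 rules
    // only push_back(const string&) was viable: construct a temporary string,
    // copy it into the vector, destroy the temporary. Under the v2.1 rules the
    // compiler sees that binding string&& to "meow" creates a temporary string
    // (an rvalue), so push_back(string&&) is preferred and the temporary is
    // moved into the vector instead of copied.
    v.push_back("meow");
}
```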

The table says "v2.1*" because these new rules haven't been completely implemented in the VC11 Developer Preview.  This is being tracked by an active bug.  (Indeed, this is a Standard bugfix.)

Rvalue references v3.0 adds new rules to automatically generate move constructors and move assignment operators under certain conditions.  This will not be implemented in VC11, which will continue to follow VC10's behavior of never automatically generating move constructors/move assignment operators.  (As with all of the not-yet-implemented features here, this is due to time and resource constraints, and not due to dislike of the features themselves!)

(By the way, all of this v0.1, v1.0, v2.0, v2.1, v3.0 stuff is my own terminology, which I think adds clarity to C++11's evolution.)

Lambdas: After lambdas were voted into the Working Paper (v0.9) and mutable lambdas were added (v1.0), the Standardization Committee overhauled the wording, producing lambdas v1.1.  This happened too late for us to implement in VC10, but we've already implemented it in VC11.  The lambdas v1.1 wording clarifies what should happen in corner cases like referring to static members, or nested lambdas.  This fixes a bunch of bugs triggered by complicated lambdas.  Additionally, stateless lambdas are now convertible to function pointers in VC11.  This isn't in N2927's wording, but I count it as part of lambdas v1.1 anyways.  It's FDIS 5.1.2 [expr.prim.lambda]/6: "The closure type for a lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function having the same parameter and return types as the closure type’s function call operator. The value returned by this conversion function shall be the address of a function that, when invoked, has the same effect as invoking the closure type’s function call operator."  (It's even better than that, since we've made stateless lambdas convertible to function pointers with arbitrary calling conventions.  This is important when dealing with APIs that expect __stdcall function pointers and so forth.)
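
A quick sketch of the stateless-lambda conversion mentioned above:

```cpp
#include <cstdio>

int main()
{
    // A capture-less lambda converts to an ordinary function pointer.
    int (*fp)(int, int) = [](int a, int b) { return a + b; };
    std::printf("%d\n", fp(2, 3));   // prints 5

    // A lambda that captures state does not convert:
    // int x = 1;
    // int (*bad)(int) = [x](int a) { return a + x; };   // error
}
```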

decltype: After decltype was voted into the Working Paper (v1.0), it received a small but important bugfix at the very last minute (v1.1).  This isn't interesting to most programmers, but it's of great interest to programmers who work on the STL and Boost.  The table says "v1.1**" because this isn't implemented in the VC11 Developer Preview, but the changes to implement it have already been checked in.

Strongly typed/forward declared enums: Strongly typed enums were partially supported in VC10 (specifically, the part about explicitly specified underlying types), and C++11's semantics for forward declared enums weren't supported at all in VC10.  Both have been completely implemented in VC11.
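
A small sketch of both features:

```cpp
#include <cstdint>

// Forward declaration is allowed because the underlying type is fixed.
enum class Color : std::uint8_t;
void paint(Color c);                 // usable before the full definition

enum class Color : std::uint8_t { Red, Green, Blue };

void paint(Color c)
{
    // Scoped enumerators don't leak into the enclosing scope and don't
    // implicitly convert to int.
    if (c == Color::Red) { /* ... */ }
}
```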

Alignment: Neither VC10 nor VC11 implement the Core Language keywords alignas/alignof from the alignment proposal that was voted into the Working Paper.  VC10 had aligned_storage from TR1.  VC11 adds aligned_union and std::align() to the Standard Library.
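
A sketch of the library-side pieces mentioned above (no alignas/alignof involved):

```cpp
#include <memory>        // std::align (new in VC11)
#include <type_traits>   // std::aligned_storage (from TR1, already in VC10)

int main()
{
    // std::align bumps a pointer forward to the requested alignment and
    // shrinks the reported space accordingly.
    char buffer[64];
    void* ptr = buffer + 1;                   // deliberately misaligned
    std::size_t space = sizeof(buffer) - 1;
    void* aligned = std::align(8, sizeof(double), ptr, space);

    // aligned_storage gives raw storage with a chosen size and alignment.
    std::aligned_storage<sizeof(double), 8>::type raw;
    (void)aligned; (void)raw;
}
```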

Standard-layout and trivial types: As far as I can tell, the user-visible changes from N2342 "POD's Revisited; Resolving Core Issue 568 (Revision 5)" are the addition of is_trivial and is_standard_layout to <type_traits>.  (N2342 performed a lot of surgery to Core Language wording, but it just makes stuff well-defined that users could have gotten away with anyways, hence no compiler changes are necessary.)  We had these type traits in VC10, but they just duplicated is_pod, so I'm calling that "No" support.  In VC11, they're powered by compiler hooks that should give accurate answers.
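
For example, the two traits now give distinct answers (a sketch):

```cpp
#include <type_traits>

struct Simple { int a; double b; };      // trivial and standard-layout

struct WithCtor {
    WithCtor() : a(42) {}                // user-provided constructor:
    int a;                               // not trivial, but still standard-layout
};

static_assert(std::is_trivial<Simple>::value, "Simple is trivial");
static_assert(std::is_standard_layout<Simple>::value, "Simple is standard-layout");
static_assert(!std::is_trivial<WithCtor>::value, "WithCtor is not trivial");
static_assert(std::is_standard_layout<WithCtor>::value, "WithCtor is standard-layout");
```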

Extended friend declarations: Last year, I said that VC10 partially supported this.  Upon closer inspection of N1791, I've determined that VC's support for this is essentially complete (it doesn't even emit "non-Standard extension" warnings, unlike some of the other Ascended Extensions in this table).  So I've marked both VC10 and VC11 as "Yes".

override and final: This went through a short but complicated evolution.   Originally (v0.8) there were [[override]], [[hiding]], and [[base_check]] attributes.  Then (v0.9) the attributes were eliminated and replaced with contextual keywords.  Finally (v1.0), they were reduced to "final" on classes, and "override" and "final" on functions.  This makes it an Ascended Extension, as VC already supports this "override" syntax on functions, with semantics reasonably close to C++11's.  "final" is also supported, but under the different spelling "sealed".  This qualifies for "Partial" support in my table.
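
A sketch of the syntax described above, using VC's spellings (the class-level "final" is written "sealed" here):

```cpp
struct Shape {
    virtual void draw()   {}
    virtual void resize() {}
};

struct Circle sealed : Shape {             // "sealed" plays the role of C++11 "final"
    virtual void draw() override {}        // verified to really override Shape::draw
    // virtual void resise() override {}   // a typo like this would be rejected:
                                           // there is nothing to override
};
```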

Minimal GC support: As it turns out, N2670's only user-visible changes are a bunch of no-op Standard Library functions, which we already picked up in VC10.

Reworded sequence points: After staring at N2239's changes, replacing C++98/03's "sequence point" wording with C++11's "sequenced before" wording (which is more useful, and more friendly to multithreading), there appears to be nothing for a compiler or Standard Library implementation to do.  So I've marked this as N/A.

Atomics, etc.: Atomics, strong compare and exchange, bidirectional fences, and data-dependency ordering specify Standard Library machinery, which we're implementing in VC11.
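
A small sketch of the <atomic> machinery (see also the Standard Library notes below):

```cpp
#include <atomic>
#include <thread>

std::atomic<int> counter(0);

void work()
{
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);   // atomic increment
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    // counter.load() == 200000, with no data race

    // Strong compare-and-exchange: succeeds only if the expected value matches.
    int expected = 200000;
    counter.compare_exchange_strong(expected, 0);
}
```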

Memory model: N2429 made the Core Language recognize the existence of multithreading, but there appears to be nothing for a compiler implementation to do (at least, one that already supported multithreading).  So it's N/A in the table.

Extended integer types: N1988 itself says: "A final point on implementation cost: this extension will probably cause no changes in most compilers. Any compiler that has no integer types other than those mandated by the standard (and some version of long long, which is mandated by the N1811 change) will likely conform already."  Another N/A feature!

That covers the Core Language.  As for the Standard Library, I don't have a pretty table of features, but I do have good news:

In VC11, we intend to completely support the C++11 Standard Library, modulo not-yet-implemented compiler features.  (Additionally, VC11 won't completely implement the C99 Standard Library, which has been incorporated by reference into the C++11 Standard Library.  Note that VC10 and VC11 already have <stdint.h>.)  Here's a non-exhaustive list of the changes we're making:

New headers: <atomic>, <chrono>, <condition_variable>, <future>, <mutex>, <ratio>, <scoped_allocator>, and <thread>.  (And I've removed the broken <initializer_list> header that I accidentally left in VC10.)
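
A quick taste of a few of the new headers working together (a sketch, not taken from the VC11 sources):

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex coutMutex;

int compute()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(50));   // <chrono>
    return 42;
}

int main()
{
    std::future<int> f = std::async(compute);    // <future>: run work asynchronously

    std::thread t([] {                           // <thread> + <mutex>
        std::lock_guard<std::mutex> lock(coutMutex);
        std::cout << "hello from a std::thread\n";
    });

    t.join();
    std::cout << "async result: " << f.get() << "\n";
}
```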

Emplacement: As required by C++11, we've implemented emplace()/emplace_front()/emplace_back()/emplace_hint()/emplace_after() in all containers for "arbitrary" numbers of arguments (see below).  For example, vector<T> has "template <typename... Args> void emplace_back(Args&&... args)" which directly constructs an element of type T at the back of the vector from an arbitrary number of arbitrary arguments, perfectly forwarded.  This can be more efficient than push_back(T&&), which would involve an extra move construction and destruction.  (VC10 supported emplacement from 1 argument, which was not especially useful.)
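
For example (a sketch):

```cpp
#include <string>
#include <vector>

struct Employee {
    Employee(std::string name, int id) : name(std::move(name)), id(id) {}
    std::string name;
    int id;
};

int main()
{
    std::vector<Employee> v;

    // push_back needs an Employee object first, then moves it into the vector.
    v.push_back(Employee("Ada", 1));

    // emplace_back perfectly forwards the arguments and constructs the element
    // in place, skipping the extra move construction and destruction.
    v.emplace_back("Grace", 2);
}
```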

Faux variadics: We've developed a new scheme for simulating variadic templates.  Previously in VC9 SP1 and VC10, we repeatedly included subheaders with macros defined differently each time, in order to stamp out overloads for 0, 1, 2, 3, etc. arguments.  (For example, <memory> included the internal subheader <xxshared> repeatedly, in order to stamp out make_shared<T>(args, args, args).)  In VC11, the subheaders are gone.  Now we define variadic templates themselves as macros (with lots of backslash-continuations), then expand them with master macros.  This internal implementation change has some user-visible effects.  First, the code is more maintainable, easier to use (adding subheaders was a fair amount of work), and slightly less hideously unreadable.  This is what allowed us to easily implement variadic emplacement, and should make it easier to squash bugs in the future.  Second, it's harder to step into with the debugger (sorry!).  Third, pair's pair(piecewise_construct_t, tuple<Args1...>, tuple<Args2...>) constructor had "interesting" effects.  This requires N^2 overloads (if we support up to 10-tuples, that means 121 overloads, since empty tuples count here too).  We initially observed that this (spamming out so many pair-tuple overloads, plus all of the emplacement overloads) consumed a massive amount of memory during compilation, so as a workaround we reduced infinity.  In VC9 SP1 and VC10, infinity was 10 (i.e. "variadic" templates supported 0 to 10 arguments inclusive).  In the VC11 Developer Preview, infinity is 5 by default.  This got our compiler memory consumption back to what it was in VC10.  If you need more arguments (e.g. you had code compiling with VC9 SP1 or VC10 that used 6-tuples), there's an escape hatch.  You can define _VARIADIC_MAX project-wide between 5 and 10 inclusive (it defaults to 5).  Increasing it will make the compiler consume more memory, and may require you to use the /Zm option to reserve more space for PCHes.
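
If you hit the default limit, the escape hatch looks like this (defining the macro project-wide with /D is the safer route, so every translation unit agrees):

```cpp
// Must be defined before any Standard Library header is included.
#define _VARIADIC_MAX 10

#include <tuple>

// With the limit raised, "variadic" templates accept up to 10 arguments again.
std::tuple<int, int, int, int, int, int, int> t(1, 2, 3, 4, 5, 6, 7);
```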

This story has a happy ending, though!  Jonathan Caves, our compiler front-end lord, investigated this and found that something our tuple implementation was doing (specifically, lots of default template arguments), multiplied by pair's N^2 overloads, multiplied by how much pair tends to get used by STL programs (e.g. every map), was responsible for the increased memory consumption.  He fixed that, and the fix is making its way over to our STL branch.  At that point, we'll see if we can raise the _VARIADIC_MAX default to 10 again (as I would prefer not to break existing code unnecessarily).

Randomness: uniform_int_distribution is now perfectly unbiased, and we've implemented shuffle() in <algorithm>, which directly accepts Uniform Random Number Generators like mersenne_twister.
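
For example (a sketch):

```cpp
#include <algorithm>
#include <random>
#include <vector>

int main()
{
    std::mt19937 gen(1729);                        // a mersenne_twister engine

    std::uniform_int_distribution<int> die(1, 6);  // unbiased integers in [1, 6]
    int roll = die(gen);

    std::vector<int> v;
    for (int i = 0; i < 10; ++i)
        v.push_back(i);

    // shuffle() consumes the URNG directly - no hand-rolled adapter needed.
    std::shuffle(v.begin(), v.end(), gen);
    (void)roll;
}
```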

Resistance to overloaded address-of operators: C++98/03 prohibited elements of STL containers from overloading their address-of operator.  This is what classes like CComPtr do, so helper classes like CAdapt were required to shield the STL from such overloads.  During VC10's development, while massively rewriting the STL (for rvalue references, among other things), our changes made the STL hate overloaded address-of operators even more in some situations.  (You might remember one of my VCBlog posts about this.)  Then C++11 changed its requirements, making overloaded address-of operators acceptable.  (C++11, and VC10, provide the helper function std::addressof(), which is capable of getting the true address of an object regardless of operator overloading.)  Before VC10 shipped, we attempted to audit all STL containers for occurrences of "&elem", replacing them with "std::addressof(elem)" which is appropriately resistant.  In VC11, we've gone further.  Now we've audited all containers and all iterators, so classes that overload their address-of operator should be usable throughout the STL.  Any remaining problems are bugs that should be reported to us through Microsoft Connect.  (As you might imagine, grepping for "&elem" is rather difficult!)  I haven't audited the algorithms yet, but a casual glance indicated to me that they aren't especially fond of taking the addresses of elements.
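
The helper in action, with the kind of class that used to cause trouble (a sketch):

```cpp
#include <memory>
#include <vector>

struct Evil {
    // Overloads unary operator&, like ATL's CComPtr does.
    Evil* operator&() { return 0; }
};

int main()
{
    Evil e;
    Evil* bogus = &e;                   // calls the overload, yields a null pointer
    Evil* real  = std::addressof(e);    // the true address, overload or not

    std::vector<Evil> v(3);             // usable as a container element in VC11
    (void)bogus; (void)real;
}
```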

We're also going beyond C++11 in a couple of ways:

SCARY iterators: As permitted but not required by the C++11 Standard, SCARY iterators have been implemented, as described by N2911 "Minimizing Dependencies within Generic Classes for Faster and Smaller Programs" and N2980 "SCARY Iterator Assignment and Initialization, Revision 1".

Filesystem: We've added the <filesystem> header from the TR2 proposal, featuring super-cool machinery like recursive_directory_iterator.  Note that the 2006 proposal (before work on TR2 was frozen due to C++0x running extremely late and turning into C++11) was derived from Boost.Filesystem V2.  It later evolved into Boost.Filesystem V3, but that will not be implemented in VC11.

Finally, in addition to numerous bugfixes, we've performed a major optimization!  All of our containers (loosely speaking) are now optimally small given their current representations.  This is referring to the container objects themselves, not their pointed-to guts.  For example, vector contains three raw pointers.  In VC10, x86 release mode, vector was 16 bytes.  In VC11, it's 12 bytes, which is optimally small.  This is a big deal if you have 100,000 vectors in your program - VC11 will save you 400,000 bytes.  Decreased memory usage saves both space and time.

This was achieved by avoiding the storage of empty allocators and comparators, as std::allocator and std::less are stateless.  (We'll activate these optimizations for custom allocators/comparators too, as long as they're stateless.  Obviously, we can't avoid storing stateful allocators/comparators, but those are quite rare.)
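
The classic way to avoid paying for an empty member is the empty base class optimization; here is a standalone sketch of the idea (not the actual header implementation):

```cpp
#include <iostream>

struct EmptyAllocator {};                        // stateless, like std::allocator

struct StoresMember {                            // naive layout: the empty member
    EmptyAllocator alloc;                        // still occupies padded space
    int* first; int* last; int* end;
};

struct UsesEmptyBase : private EmptyAllocator {  // empty base contributes 0 bytes
    int* first; int* last; int* end;
};

int main()
{
    std::cout << sizeof(StoresMember) << " vs. " << sizeof(UsesEmptyBase) << '\n';
    // On x86 this typically prints "16 vs. 12" - the same kind of saving
    // the VC11 containers get for vector.
}
```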

Here are all of the sizes for x86 and x64.  (32-bit ARM is equivalent to x86 for these purposes).  Naturally, these tables cover release mode, as debug mode contains checking machinery that consumes space and time.  I have separate columns for VC9 SP1, where _SECURE_SCL defaulted to 1, and for VC9 SP1 with _SECURE_SCL manually set to 0 for maximum speed.  VC10 and VC11 default _SECURE_SCL to 0 (now known as _ITERATOR_DEBUG_LEVEL).

 

x86 Container Sizes (Bytes) | VC9 SP1 | VC9 SP1 (SCL=0) | VC10 | VC11
vector<int> | 24 | 16 | 16 | 12
array<int, 5> | 20 | 20 | 20 | 20
deque<int> | 32 | 32 | 24 | 20
forward_list<int> | N/A | N/A | 8 | 4
list<int> | 28 | 12 | 12 | 8
priority_queue<int> | 28 | 20 | 20 | 16
queue<int> | 32 | 32 | 24 | 20
stack<int> | 32 | 32 | 24 | 20
pair<int, int> | 8 | 8 | 8 | 8
tuple<int, int, int> | 16 | 16 | 16 | 12
map<int, int> | 32 | 12 | 16 | 8
multimap<int, int> | 32 | 12 | 16 | 8
set<int> | 32 | 12 | 16 | 8
multiset<int> | 32 | 12 | 16 | 8
hash_map<int, int> | 72 | 44 | 44 | 32
hash_multimap<int, int> | 72 | 44 | 44 | 32
hash_set<int> | 72 | 44 | 44 | 32
hash_multiset<int> | 72 | 44 | 44 | 32
unordered_map<int, int> | 72 | 44 | 44 | 32
unordered_multimap<int, int> | 72 | 44 | 44 | 32
unordered_set<int> | 72 | 44 | 44 | 32
unordered_multiset<int> | 72 | 44 | 44 | 32
string | 28 | 28 | 28 | 24
wstring | 28 | 28 | 28 | 24

 

x64 Container Sizes (Bytes) | VC9 SP1 | VC9 SP1 (SCL=0) | VC10 | VC11
vector<int> | 48 | 32 | 32 | 24
array<int, 5> | 20 | 20 | 20 | 20
deque<int> | 64 | 64 | 48 | 40
forward_list<int> | N/A | N/A | 16 | 8
list<int> | 56 | 24 | 24 | 16
priority_queue<int> | 56 | 40 | 40 | 32
queue<int> | 64 | 64 | 48 | 40
stack<int> | 64 | 64 | 48 | 40
pair<int, int> | 8 | 8 | 8 | 8
tuple<int, int, int> | 16 | 16 | 16 | 12
map<int, int> | 64 | 24 | 32 | 16
multimap<int, int> | 64 | 24 | 32 | 16
set<int> | 64 | 24 | 32 | 16
multiset<int> | 64 | 24 | 32 | 16
hash_map<int, int> | 144 | 88 | 88 | 64
hash_multimap<int, int> | 144 | 88 | 88 | 64
hash_set<int> | 144 | 88 | 88 | 64
hash_multiset<int> | 144 | 88 | 88 | 64
unordered_map<int, int> | 144 | 88 | 88 | 64
unordered_multimap<int, int> | 144 | 88 | 88 | 64
unordered_set<int> | 144 | 88 | 88 | 64
unordered_multiset<int> | 144 | 88 | 88 | 64
string | 40 | 40 | 40 | 32
wstring | 40 | 40 | 40 | 32

 

Stephan T. Lavavej
Visual C++ Libraries Developer

Building Metro Style Apps with C++ and JavaScript


Hi, I’m Raman Sharma, a Program Manager with the Visual C++ team.

As seen a few weeks ago at //BUILD, in the next version of Visual Studio you’ll be able to create Metro style apps with JavaScript. But this doesn't mean that JavaScript is the only language you can use. There are several reasons to leverage compiled code as well.

In this video, which I recently posted on Channel 9, you’ll learn how your HTML5-based Metro style app can directly access native code.

I'll show you how to access your C++ Windows Runtime components from JavaScript.

After this talk, you'll understand how to combine native and script code in order to build the most compelling Metro style applications.

 

[Watch this video in Channel 9]

Asynchronous Operations in Windows 8 with the Parallel Patterns Library (PPL)


The Concurrency Runtime team is working on the next generation of the Parallel Patterns Library (PPL) that will help you consume asynchronous operations in your apps using a simpler development model than the one Windows 8 has built-in as part of its Windows Runtime.

To that end, they recently published a blog post explaining the rationale behind their abstraction and, no less importantly, a set of samples showing it in action.
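
As a rough sketch of the model (using the public PPL tasks API names; not taken from the samples themselves), a CPU-bound operation wrapped in a task composes with .then() continuations in the same way a wrapped WinRT asynchronous operation does:

```cpp
#include <ppltasks.h>
#include <iostream>

int main()
{
    concurrency::task<int> work = concurrency::create_task([]
    {
        return 6 * 7;                    // stand-in for real asynchronous work
    });

    work.then([](int result)
    {
        std::cout << "result: " << result << '\n';
    }).wait();                           // block only for this console sketch
}
```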


C9::GoingNative 2: the Windows Runtime Library (WRL)


We're back with the third installment of C9::GoingNative.

At the recent //BUILD conference, we introduced a series of technologies targeting the upcoming version of the Windows platform. One of them consists of a set of extensions to the C++ language, intended to help developers bridge their C++ logic to the Windows Runtime (WinRT) environment.

C++/CX (the name of these extensions) is a lightweight syntax for COM creation – COM being the framework that allows components written in different languages to interoperate in Windows. In practice, it lets the user interface be designed with dedicated tools like Microsoft Expression (XAML) or any HTML5 editor, while the application behavior is written in C++.

The reception of C++/CX has been mixed so far. It is appreciated by developers who considered COM a complex technology despite its usefulness. It is less liked by developers who have already dealt with COM or the Active Template Library (ATL), an abstraction layer that makes COM creation easier.

The latter asked for an approach that doesn’t involve non-standard language extensions, but rather an API that encapsulates COM’s complexities. That API is the Windows Runtime Library (WRL), which follows the principles of ATL but re-implements them for the Windows Runtime.

In this episode, we interviewed Sridhar Madhugiri, one of the authors of the WRL, who answered questions like: When would you use WRL? Why would you use it? How do you use it?

Before that, Tarek Madkour, a lead on the VC++ team, shares some wise perspectives on modern C++ for Windows 8 (Metro style apps). Enjoy this episode!

Announcing Wrox's Professional C++, 2nd Edition


Greetings! I’m Marc Gregoire, a Microsoft MVP for Visual C++ since 2007, and I’m pleased to tell you that I’ve finished work on my book “Professional C++, Second Edition”.

This second edition covers the latest C++ standard, C++11, and is based on the great first edition written by Nicholas A. Solter and Scott J. Kleper. The book is published by Wiley/Wrox.

Here is the official description for the book:

Essential reading for experienced developers who are determined to master the latest release of C++

Although C++ is often the language of choice from game programming to major commercial software applications, it is also one of the most difficult to master. With this no-nonsense book, you will learn to conquer the latest release of C++. The author deciphers little-known features of C++, shares detailed code examples that you can then plug into your own code, and reveals the significant changes to C++ that accompany the latest release. You'll discover how to design and build applications that solve real-world problems and then implement the solution using the full capabilities of the language.

Appeals to experienced developers who are looking for a higher level of learning

  • Drills down the extensive changes to the latest C++ standard, C++11, including enhancements made to run-time performance, standard library, language usability, and core language
  • Zeroes in on explaining the more poorly understood elements of the C++ feature set and addresses common pitfalls to avoid
  • Includes case studies that feature extensive, working code that has been tested on Windows and Linux platforms
  • Intertwines text with useful tips, tricks, and workarounds

Packed with best practices for programming, testing, and debugging applications, this book is vital for taking your C++ skills to the next level.


C9 Lecture: C Runtime (CRT) Topics, by Mahmoud Saleh


In a recent GoingNative episode on Channel 9, Charles (Torre) and I interviewed Mahmoud Saleh, the software engineer who maintains the C Runtime Library.

Now Mahmoud has prepared a one-hour lecture for you on CRT topics such as:

  • Memory leak detection (CRT debug heap).
  • Unhandled exceptions.
  • Assert and error reporting.
  • CRT entry points.
  • CRT support for Unicode.
  • SBCS and MBCS. And
  • Optimizing file I/O in the CRT.

[Watch this Channel 9 lecture]

Mahmoud Saleh on the C Runtime (CRT)

Inside the C++/CX Design


Hello. This is Jim Springfield, an architect on the Visual C++ team.

Today, I want to give some insight into the new language extensions, officially called C++/CX, which was designed to support the new API model in Windows 8. If you attended //BUILD/, watched some of the sessions online, or have been playing with the prerelease of Visual Studio, you probably have seen some of the “new” syntax. For anyone who is familiar with C++/CLI (i.e. the language extensions we provide for targeting the CLR), the syntax shouldn’t seem much different.

Please note, however, that while the C++/CX syntax is very similar to C++/CLI, the underlying implementation is very different: it does not use the CLR or a garbage collector, and it generates completely native code (x86, x64, or ARM, depending on the target).

Early on in the design of our support for Windows 8, we looked at many different ideas including a pure library approach as well as various ways to integrate support in the language. We have a long history of supporting COM in the Visual C++ team. From MFC to ATL to #import to attributed ATL. We also have a good bit of experience at targeting the CLR including the original managed extensions, C++/CLI, and the IJW support for compiling native code to MSIL. Our design team consisted of seven people and included people who worked on these and who have lots of experience in libraries, compiler implementation, and language design.

We actually did develop a new C++ template library for Windows 8 called WRL (Windows Runtime Library) that does support targeting Windows 8 without language extensions. WRL is quite good, and it can be illuminating to take a look at it and see how all of the low-level details are implemented. It is used internally by many Windows teams, although it does suffer from many of the same problems that ATL does in its support of classic COM.

  1. Authoring of components is still very difficult. You have to know a lot of the low-level rules about interfaces.
  2. You need a separate tool (MIDL) to author interfaces/types.
  3. There is no way to automatically map interfaces from low-level to a higher level (modern) form that throws exceptions and has real return values.
  4. There is no unification of authoring and consumption patterns.

With some of the new concepts in the Windows Runtime, these drawbacks become even more difficult than in classic COM/ATL. Interface inheritance isn’t vtable-based like it is in classic COM. Class inheritance is based on a mechanism similar to aggregation but with some differences including support for private and protected interfaces. We quickly realized that although there is a need for a low-level tool like WRL, for the vast majority of uses, it is just too hard to use and we could do a lot better while still preserving performance and providing a lot of control.

The #import feature that was available in VC6 provides a good mechanism for consuming COM objects that have a type library. We thought about providing something similar for the Windows Runtime (which uses a new .winmd file), but while that could provide a good consumption experience, it does nothing for authoring. Given that Windows is moving to a model where many things are asynchronous, authoring of callbacks is very important and there aren’t many consumption scenarios that wouldn’t include at least some authoring. Also, authoring is very important for writing UI applications as each page and user-defined control is a class derived from an existing Runtime class.

The design team spent a lot of time discussing what consumption of Windows Runtime components should look like. We decided early on that we should expose classes and interfaces at a higher level than what the ABI defines. Supporting modern C++ features such as exceptions was deemed to be important as well as mapping the Runtime definition of inheritance (both for interfaces and classes) to C++ in such a way that it was natural. It quickly became clear that we would need some new type categories to represent these as we couldn’t change what the existing C++ ABI meant. We went through a lot of different names and it wasn’t until we decided to use the ^ that we also decided to use ref class to indicate the authoring of a Windows Runtime class.

We also spent a lot of time exploring various approaches to how to hold onto a pointer to a WinRT class or interface. Part of this decision was also how to tell the difference between a low-level version of an interface and the high-level version of the interface. We had a lot of different proposals including just using a *, using * with a modifier, and using various other characters such as the ‘@’ symbol. In the original extensions we did for managed code, we in fact did use a * with a modifier (__gc). We realized we would have many of the same problems if we followed that route. Some of the breakthroughs came when we started thinking about what the type of a pointer dereference would be. This made us realize that what we were doing was similar to what we did when C++/CLI was designed. At one point, someone suggested “Why don’t we just use the ^ symbol?” After the laughter died down, it started making a lot of sense. On design point after design point, we often came to the same design decision we had made for C++/CLI.

Many of the concepts we were trying to express were already present in the C++/CLI syntax. Given that reference counting is a form of garbage collection, using ^ to represent a “refcounted” pointer in ZW fits quite well. Dereferencing of a ^ yields a %, also like C++/CLI. While many concepts are expressed the same way, there are a few areas where we decided to deviate from C++/CLI. For example, in C++/CX, the default interface on a class is specified through an attribute in the interface list while in C++/CLI it is an attribute on the class itself.

In C++/CX we have a much better story than C++/CLI when it comes to interoperating references types with regular types. In C++/CLI, managed objects can move around in memory as the garbage collector runs. This means you can’t get the real address of a member (without pinning) or even embed anything except primitive types (i.e. int) into your class. You also cannot put a ^ into a native class or struct. In C++/CX, objects do not move around in memory and thus all of these restrictions are gone. You can put any type into a ref class and you can put a ^ anywhere. This model is much friendlier to normal C++ types and gives the programmer more flexibility in C++/CX.
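
A minimal, illustrative sketch of the syntax (compiled with /ZW; the class and member names are made up for the example):

```cpp
#include <string>

namespace SampleComponent
{
    public ref class Calculator sealed
    {
    public:
        int Add(int a, int b) { return a + b; }

    private:
        // Unlike C++/CLI, ordinary native members can live directly inside a
        // ref class, because C++/CX objects never move in memory.
        std::string m_label;
    };
}

// And a ^ (reference-counted handle) can live inside a plain native class.
class NativeWrapper
{
    SampleComponent::Calculator^ m_calc;
public:
    NativeWrapper() : m_calc(ref new SampleComponent::Calculator()) {}
    int Use() { return m_calc->Add(2, 3); }
};
```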

We will be providing more insight into our design over the coming months. If there are specific things you would like to know more about, please let us know.

Try It Now: Use PPL to Produce Windows 8 Asynchronous Operations


There's a new revision of the Concurrency Runtime and Parallel Patterns Library sample pack that demonstrates a convenient way of consuming and producing Windows Runtime asynchronous operations using PPL.

Read the announcement at sister blog Parallel Programming in Native Code.

GoingNative 3: Marian Luparu answers about C++/CX


C++/CX language design team member Marian Luparu sits in the hot seat to answer some questions (a few from the GoingNative community – thank you!), draw on the whiteboard, and demo some code.

[Watch the episode in Channel 9]

Game Debugging in Visual Studio 11


Hi! I am Amit Mohindra, a Program Manager on the Visual C++ team.

We believe Metro style games and graphics-intensive apps present a huge opportunity for developers on new devices such as tablets. The primary API for accessing the full power of the underlying graphics hardware on Windows is DirectX 11 (including Direct3D and Direct2D).

One of the most significant innovations we have brought to Visual Studio 11 is a series of tools for assisting you in developing Direct3D games. We made a quick video of some of these features on Channel9 (link). In this post, I will walk through our debugging & diagnostics support for D3D.

The new Graphics Debugger in Visual Studio is a debugging and analysis tool that captures detailed information from a Direct3D application as it executes. You can use it to:

  • Capture rendered frames for later inspection and analysis.
  • View DirectX events and their effects on the application.
  • View 3D meshes before and after vertex shader transformations.
  • Discover which DirectX events contribute to the color of a specific pixel.
  • Jump directly to the location in source code for a particular DirectX call

Let’s try to solve a simple hypothetical problem step by step using the graphics debugger. For this blog post, the game we’re working on is a rotating Die game. Here is what it should look like.

However, when we run the application we find out that the Die does not get rendered in the game.

To start debugging this game, right-click on the project in Solution Explorer and set “Enable Graphics Capture” to “Yes” in the Debugging node.

Now, press F5 to start debugging your application and you will notice that the game has some fundamental statistics displayed in the top left corner. This indicates that VS is ready to capture diagnostic information from the game for you to investigate the rendering issues.

In order to capture a frame, just hit the “Print Screen” key. You can repeat this as often as you like; each frame will show up in Visual Studio as part of the “Graphics Experiment.vsglog” file. The log file contains all the information required for you to debug the rendering issues. The file by default is located in a temporary location, but you can choose to save the file and share it with other developers.

Let’s dig further into what’s going on by analyzing the captured frame in Visual Studio. To understand it better I want to know what got drawn when I called the “DrawIndexed” API in my code.

To inspect the “DrawIndexed” call right-click on the frame thumbnail and select “Events List” from the context menu to bring up the events list window.

The Graphics Event List window lists all the DirectX events captured by running a program under the VS debugger. It also simulates the events under the graphics debugger, re-running the commands using the same inputs as used by the running program. In the search box you could type “Draw” to filter the list to show the DirectX draw calls. Clicking through the draw calls you will be able to see in the frame window (on the right) how that frame was drawn piece by piece.

In the search box in this window, type “Draw”; this filters the list to show the draw calls made by the game. Select the “DrawIndexed” call in the events list and check in the frame window whether something gets rendered.

Unfortunately, in this scenario nothing is rendered, so we need to continue our debugging process. A draw API (DrawIndexed) submits work to the rendering pipeline, so let’s inspect the rendering pipeline to see if something was submitted as part of the “DrawIndexed” call. In order to see what was going on in the graphics pipeline when the draw call was executed, right-click on the call in the events list and bring up the pipeline viewer by selecting “Pipeline Viewer” from the context menu.

The Graphics pipeline viewer shows the different stages in the graphics pipeline and how those stages modify your model.

The viewer shows four different views of the mesh data, corresponding to four stages in the pipeline:

  • Pre-Vertex Shader: the mesh vertices before the vertex shader runs. The camera is pointed at the center of the object.
  • Post-Vertex Shader: the mesh vertices after the vertex shader runs. The camera is pointed at the center of the object.
  • Geometry Shader (not seen in this example, as it is not used): the mesh vertices after the geometry shader runs. If no geometry shader is assigned, this view is blank (a solid shade of light gray).
  • Viewport: the mesh vertices in screen space. The camera is set up identically to the camera in the target program.

Note that there are other stages in the graphics pipeline that are not represented in this view. For more on the new graphics pipeline stages in DirectX 11, see here.

In the graphics pipeline view you can see that something that looks like the Die is being drawn. The “Die” (cube) structure appears to be going through the graphics pipeline correctly, which indicates that the shaders are working: they aren’t modifying the original geometry in a way that would prevent it from being rendered.

At this point we need to continue debugging and look elsewhere. It could be that the pixels are being overwritten or discarded. To verify this hypothesis, let’s take a look at the pixels in the center of the frame (where the Die is supposed to be rendered). In the frame buffer window, click in the center to select a pixel, then right-click and choose “Pixel History” from the context menu.

The Graphics Pixel History window displays the events that contribute to the color of the selected pixel in the current frame, showing how each DirectX event modified the pixel. This includes the initial frame buffer state, the intermediate draw events, and the final result.

We can see from the “Pixel History” window that the frame buffer color for the pixel started out blue, and that the pixel shader then output a different color (grey) for the pixel. However, the final color is still blue: the color produced by the pixel shader is not being applied at all. This is an indication that the blend state might not be set correctly. Blend state controls how color and alpha values are blended when combining rendered data with existing render target data. To inspect the blend state, right-click on the pixel and choose “Object Table” from the context menu. This brings up the “Graphics Object Table” window.

The Graphics Object Table displays the Direct3D objects created by the target program. Any object that contains state information can be viewed by double-clicking it in the table.

Sort the object table by “Type” and scroll to the “D3D11 Blend State” objects. “Usage Age” helps you scope which object is relevant to the selected draw call: the lower the usage age, the more relevant the object. Double-click the “D3D11 Blend State” object with the lowest valid (not N/A) usage age, and a document with the details of that object opens inside Visual Studio.

From the picture above we can see:

    Blend Source = D3D11_BLEND_ZERO;    // the color output by the pixel shader
    Blend Destination = D3D11_BLEND_ONE;    // the color already in the render target
    Blend Op = D3D11_BLEND_OP_ADD;    // the operation that combines source and destination

We can see that the source color is being masked out, since the source blend factor is set to zero (D3D11_BLEND_ZERO), while the destination blend factor is set to one (D3D11_BLEND_ONE). This causes the background color to come out as the final color after blending, which is why we don’t see the Die being rendered. The values should be switched so that the output of the pixel shader becomes the primary output of the blending operation.
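
To see why this hides the Die, it helps to write out the (simplified) equation the output-merger stage evaluates per pixel when blending is enabled; alpha is blended separately with its own factors:

    final = (source × SrcBlend) BlendOp (destination × DestBlend)

With the captured state this becomes final = (source × 0) + (destination × 1) = destination, so the grey color produced by the pixel shader is discarded and the blue background wins. With the intended state it becomes final = (source × 1) + (destination × 0) = source, and the Die’s pixels replace the background.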

    Blend Source = D3D11_BLEND_ONE;
    Blend Destination = D3D11_BLEND_ZERO;
    Blend Op = D3D11_BLEND_OP_ADD;

To find out where the blend state is being set, we can go back to the event list window (which lists all the DirectX events in the captured frame) and filter it on “blendstate”. After filtering, you can see calls to “OMSetBlendState”, the call that sets the blend state. To get to the source code where this call is made, right-click on the “OMSetBlendState” call and choose “Call Stack” from the context menu to bring up the “Graphics Call Stack” window.

The Graphics Event Callstack window ties the DirectX events to the C++ source code being debugged.

Double-click the first call in the call stack and it takes you to the “OMSetBlendState” call in the code. A few lines above the “OMSetBlendState” call, you will find the lines of code where the source and destination blend values are set.

Switch the two states by changing the code to reflect the following:

    blenddesc.RenderTarget[0].SrcBlend=D3D11_BLEND_ONE;
    blenddesc.RenderTarget[0].DestBlend=D3D11_BLEND_ZERO;
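
For context, here is a minimal sketch of what the corrected blend-state setup might look like end to end. The variable names (blenddesc, m_d3dDevice, m_d3dContext) are illustrative assumptions rather than the sample’s actual code, but the structures and calls are standard Direct3D 11 (<d3d11.h>, with Microsoft::WRL::ComPtr from <wrl/client.h>):

    // Describe blending for render target 0: pass the pixel shader output through unchanged.
    D3D11_BLEND_DESC blenddesc = {};
    blenddesc.RenderTarget[0].BlendEnable           = TRUE;
    blenddesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;    // keep the source color
    blenddesc.RenderTarget[0].DestBlend             = D3D11_BLEND_ZERO;   // discard the destination color
    blenddesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    blenddesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    blenddesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    blenddesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    blenddesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    Microsoft::WRL::ComPtr<ID3D11BlendState> blendState;
    m_d3dDevice->CreateBlendState(&blenddesc, blendState.GetAddressOf());

    // Bind the blend state on the output-merger stage before issuing the draw calls.
    m_d3dContext->OMSetBlendState(blendState.Get(), nullptr, 0xffffffff);

With SrcBlend set to ONE and DestBlend set to ZERO, blending simply passes the pixel shader output through, which is what an opaque object like the Die needs.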

Build and Run.

Summary:

Game development isn’t easy, and we think tools should help you make the most of the underlying DirectX platform. In this release, we are excited to bring this new type of diagnostics experience directly into Visual Studio. These features are still under development and could change, or might not be included in the final product. They are available only in the Visual Studio Professional and higher SKUs.


Code Analysis in Visual C++ 11


The Microsoft Security Science team has recently posted a note about Security Development Lifecycle integration as part of the Code Analysis rules coming with the next version of Visual C++.

[Read the full post]

Announcing GoingNative 2012 Conference


Register for GoingNative 2012 today

We know developers are hungry for information about native development. The GoingNative conference aims to provide current technical information to as many people as possible.

Register now!

GoingNative 2012 is a 48-hour technical event for those who push the boundaries of general-purpose computing by exploiting the true capabilities of the underlying machine: C++ developers. Distinguished speakers include the creator of C++, Bjarne Stroustrup; C++ Standards Committee chair Herb Sutter; C++ template and big-compute master Andrei Alexandrescu; STL master Stephan T. Lavavej; and more! The official agenda will be released over the next month or so. Join us!

Feb 2-3, 2012
Microsoft Corporate Campus
Building 33
Redmond, WA, USA

  • Streamed live (on-demand < 24 hours later, each day) right here.
  • Evening event (party): great food (dinner), music, drinks, and people!
  • Shuttles from Bellevue's Lincoln Square (where we recommend booking your hotel)

Hurry up and reserve your spot!!


UPDATE: read Herb Sutter's post on GoingNative 2012.

Compiler Security Enhancements in Visual Studio 11


Tim Burrell (MSEC Security Science Team) just posted a new article on the Security Development Lifecycle (SDL) blog.

[Read article here]

GoingNative 2012 Full Schedule


Charles has recently published the agenda for GoingNative 2012, the first C++-only event held at Microsoft in many years.

Great speakers and compelling topics. Take a look here.

Enhancements to /GS in Visual Studio 11


Tim Burrell outlines more of the work done by Security Science & the Visual Studio team.

He previously noted that they are updating the on-by-default /GS compiler switch, which provides protection against some memory-safety bugs such as buffer overflows. In a new post, he provides additional information on those changes.

[Go to the article]
