Create TextureStream.md #2743

Open · wants to merge 19 commits into base: main

Changes from 1 commit:
Address missing basic markup for md file.
sunggook committed Sep 13, 2022
commit d896d6b4d117186dd5b9295691a826473c6f0a36
49 changes: 30 additions & 19 deletions specs/APIReview_TextureStream.md
@@ -1,31 +1,38 @@

TextureStream
===============================================================================================

# Background
Many native apps use a native engine for real-time communication scenarios, including video
capture, networking, and video rendering. However, these apps often still use WebView or
Electron for UI rendering. The separation between real-time video rendering and UI rendering
prevents apps from rendering real-time video inside the web contents, which forces them to
render the video on top of the web contents instead. Rendering video on top constrains the
user experience and may also cause performance problems.
We could ask native apps to use the web renderer for video handling, because the web platform
already provides these features through the WebRTC APIs. End developers, however, prefer to
keep their existing engine for capturing and composition, while using the WebRTC APIs only
for rendering.

# Description
The proposed APIs allow end developers to stream captured or composed video frames to the
WebView renderer, where JavaScript can insert the frames into the page for display through the
W3C standard Video and MediaStream APIs.
The API uses a shared GPU texture buffer to minimize the overall cost of frame copies.

# Examples
## Javascript

This JavaScript code is common to both of the following samples:

```js
// The user clicks the video capture button.
document.querySelector('#showVideo').addEventListener('click',
    e => getStreamFromTheHost(e));
async function getStreamFromTheHost(e) {
  try {
    // Request a stream from the host with a unique stream id.
    const stream = await window.chrome.webview.getTextureStream('webview2-abcd1234');
    // A MediaStream object is returned; get the video MediaStreamTrack from it.
    const video_tracks = stream.getVideoTracks();
    const videoTrack = video_tracks[0];
```

> **Contributor:** `video_tracks` and `videoTrack` are unused?

```js
@@ -35,13 +42,16 @@ async function getStreamFromTheHost(e) {
    console.log(error);
  }
}
```

## Win32 C++
```cpp
// Get the LUID (graphics adapter) that the WebView renderer uses.
UINT32 luid;
coreWebView->GetRenderAdapterLUID(&luid);
```

> **Contributor:** Many calls throughout here are missing error handling - they aren't checking the HRESULT return value. In our sample code we have a CHECK_HRESULT (or something like that) macro we use.
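For illustration, here is a minimal sketch of the kind of error-checking macro the reviewer
describes; the name and behavior are assumptions, not the actual macro from the WebView2
samples:

```cpp
#include <cassert>
#include <windows.h>

// Hypothetical CHECK_HRESULT: assert in debug builds and bail out of the
// calling (void-returning) function when a COM call fails. The macro used
// in the real samples may differ.
#define CHECK_HRESULT(expr)       \
  do {                            \
    HRESULT hr_ = (expr);         \
    assert(SUCCEEDED(hr_));       \
    if (FAILED(hr_))              \
      return;                     \
  } while (0)

// Usage:
//   CHECK_HRESULT(coreWebView->GetRenderAdapterLUID(&luid));
```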
```cpp
// Create D3D device based on the WebView's LUID.
ComPtr<ID3D11Device> d3d_device = MyCreateD3DDevice(luid);
// Register a unique texture stream that the host can provide.
ComPtr<ICoreWebView2TextureStream> webviewTextureStream;
g_webviewStaging3->CreateTextureStream(L"webview2-abcd1234", d3d_device.Get(), &webviewTextureStream);
```

> **Contributor:** Usually in WebView2, object creation happens on the CoreWebView2Environment as it acts sort of like a factory. Then the object may have additional initialization (like setting up event handlers) and then add or use the object with a CoreWebView2.
>
> Here it looks like the Create method is on the CoreWebView2 and creating it also adds it (by the name parameter) to the available texture streams for that CoreWebView2. Is the TextureStream tied to that CoreWebView2 in particular? Or can we move the TextureStream creation to the CoreWebView2Environment and have a separate method for 'adding' it to a CoreWebView2? If so this would have the benefits of:
>
> - Clearer from API calls when the TextureStream has been added to a WebView2.
> - No concern over races where the TextureStream has been created with a particular name, but the event handlers and such haven't been set up yet.
> - Able to use the same one TextureStream object with different CoreWebView2s or the same CoreWebView2 but with different names.

```cpp

// Register the origin URL that the target renderer can stream for the registered stream id.
// Requests from an unregistered origin will fail to stream.
@@ -78,7 +88,7 @@ webviewTextureStream->add_TextureError(Callback<ICoreWebView2StagingTextureStrea
}
return S_OK;
}).Get(), &texture_token);

// TextureStream APIs are called on the UI thread of the WebView2 process, while video capture
// and composition can happen on a worker thread or out of process.
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
@@ -131,8 +141,10 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
break;
}
}
```

> **Contributor:** Need `` ``` `` to end the code sample block here.
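The sample above calls a helper named `MyCreateD3DDevice` that it never defines. A plausible
sketch using standard DXGI adapter enumeration follows; the function name comes from the
sample, but everything else here is an assumption (note the sample passes the LUID as a
UINT32, so only the low part is compared):

```cpp
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: create a D3D11 device on the adapter whose LUID matches the one
// reported by GetRenderAdapterLUID. Error handling is omitted for brevity.
ComPtr<ID3D11Device> MyCreateD3DDevice(UINT32 luidLowPart) {
  ComPtr<IDXGIFactory1> factory;
  if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
    return nullptr;

  // Find the adapter whose LUID low part matches.
  ComPtr<IDXGIAdapter1> adapter;
  for (UINT i = 0;
       factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);
    if (desc.AdapterLuid.LowPart == luidLowPart)
      break;
    adapter.Reset();
  }

  // Create the device on that adapter (or the default hardware adapter
  // if no match was found).
  ComPtr<ID3D11Device> device;
  D3D11CreateDevice(
      adapter.Get(),
      adapter ? D3D_DRIVER_TYPE_UNKNOWN : D3D_DRIVER_TYPE_HARDWARE,
      nullptr, 0, nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, nullptr);
  return device;
}
```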

# API Details
## Win32 C++

> **Contributor:** Suggested change: tag the opening code fence as `c#` (but really IDL).

```
[v1_enum]
typedef enum COREWEBVIEW2_TEXTURE_STREAM_ERROR_KIND {
/// The host can't create a TextureStream instance more than once
@@ -349,5 +361,4 @@ interface ICoreWebView2StagingRenderAdapterLUIDUpdatedEventHandler : IUnknown {
[in] ICoreWebView2Staging3 * sender,
[in] IUnknown* args);
}


```