Background image for tracing #1707
Conversation
Quick feedback:
@justvanrossum I tried a lot, but sadly without success yet. The main issue I have is: how can we display a locally stored image? Accessing local files from an HTTP request isn't allowed. Do you have an idea how to solve this? Thanks a lot in advance.
I didn’t realize you’d work on that today. That approach cannot work, as you’ve found out. We need to serve images as part of the font data.
I am not sure whether this is useful or not, but I figured out that we can load an image via http://localhost:8000. The image is misplaced (it should actually be at the position of the rectangle), but it's there – yeah :)
Correct position of the image: [video: Screen.Recording.2024-10-11.at.18.25.16.mp4]
The Image type should get an attribute named ….

Loading of the image should be done outside of the visualization draw func, although loading could perhaps be triggered by draw (but only if it's cheap); we should then trigger a canvas redraw once the image has been loaded. It is vitally important that the draw func does not do any loading itself. The effect will be that if you put a glyph in edit mode, there may be a delay before the background image appears.

While it may be desirable to load images via regular HTTP(S) instead of the websocket, this raises questions about how that data should then participate in undo and external changes. Doing this correctly may add a lot of complexity, which can perhaps be avoided if we allow image data to travel as base64 inside JSON messages. If we go that route, we may have to enforce a maximum image size. I have some ideas I'm trying to formulate about potentially avoiding base64-encoding the data, but I'm not there yet. Perhaps we can start with a simplistic approach, without worrying too much about efficiency. (I'm considering making the infrastructure general for any binary data, not limited to image data.)

ReadableFontBackend: …

WritableFontBackend: …
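The base64-inside-JSON route described above could be sketched roughly like this. This is a minimal sketch, not Fontra's actual protocol: the message shape, the function names, and the `MAX_IMAGE_SIZE` value are all assumptions for illustration.

```python
import base64
import json

# Assumed limit -- the comment above only says we "may have to enforce
# a maximum image size"; 4 MB is an arbitrary placeholder value.
MAX_IMAGE_SIZE = 4 * 1024 * 1024


def packImageMessage(name: str, data: bytes) -> str:
    # Refuse oversized payloads before base64 inflates them by ~33%.
    if len(data) > MAX_IMAGE_SIZE:
        raise ValueError(f"image {name!r} exceeds {MAX_IMAGE_SIZE} bytes")
    encoded = base64.b64encode(data).decode("ascii")
    return json.dumps({"type": "image", "name": name, "data": encoded})


def unpackImageMessage(message: str) -> tuple[str, bytes]:
    obj = json.loads(message)
    return obj["name"], base64.b64decode(obj["data"])
```

A round trip then looks like `unpackImageMessage(packImageMessage("bg.png", data))`, returning the original name and bytes.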
(Force-pushed from 7a3327a to a0e4ade.)
Current stage of my playground: [video: Screen.Recording.2024-11-01.at.12.44.13.mp4]
This should allow you to get rid of the customData/ufoDir hack in designspace.py.
(Force-pushed from 3aeb1f7 to 81f646c.)
@justvanrossum Current stage without customData: [video: Screen.Recording.2024-11-06.at.08.08.32.mp4]
You ignored the method signature that I suggested:
getBinaryData(name: str) -> bytes
The method is intended to retrieve a single binary blob of data (say, for an image). Returning all binary data available in the entire font is insanely inefficient.
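As a sketch, the suggested signature could sit on the backend protocol roughly like this. Only the `getBinaryData(name: str) -> bytes` signature comes from the review comment; the `ReadableFontBackend` name appears earlier in this thread, and the toy `DictBackend` implementation is purely illustrative.

```python
from abc import ABC, abstractmethod


class ReadableFontBackend(ABC):
    # Hypothetical protocol sketch; only the getBinaryData signature
    # is taken from the review comment above.
    @abstractmethod
    async def getBinaryData(self, name: str) -> bytes:
        """Return a single binary blob (e.g. one image) by name."""


class DictBackend(ReadableFontBackend):
    # Toy in-memory backend, for illustration only.
    def __init__(self, blobs: dict[str, bytes]):
        self._blobs = blobs

    async def getBinaryData(self, name: str) -> bytes:
        return self._blobs[name]
```

The point of the per-name lookup is that each blob is fetched on demand, instead of materializing every image in the font at once.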
@@ -220,6 +220,8 @@ async def _getData(self, key: str) -> Any:
                value = await self.backend.getCustomData()
            case "unitsPerEm":
                value = await self.backend.getUnitsPerEm()
            case "binaryData":
No, "binaryData" is not a font attribute like "unitsPerEm" is. See general comment.
@@ -1072,6 +1076,40 @@ def _writeDesignSpaceDocument(self):
        self.dsDoc.write(self.dsDoc.path)
        self.dsDocModTime = os.stat(self.dsDoc.path).st_mtime

    async def getBinaryData(self) -> dict[str, bytes]:
So this would return all binary data for all layers. That's insanely inefficient: we always load "big things" on demand. But see the general comment, you ignored the method signature that I prescribed.
const img = new Image();
img.src = "data:image/jpg;base64," + this.fontController.getImage(image.fileName);

const sx = image.transformation.translateX;
The way to do this is to convert the decomposed transform to a "normal" transform with decomposedToTransform(image.transformation), and then use the t.transformPoint() method for the four corners.
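decomposedToTransform and transformPoint are Fontra-side helpers; a rough Python analogue of the suggested approach might look like the sketch below. The function names mirror the comment above, but the matrix layout and the translate-rotate-scale decomposition order are assumptions — the real helper also handles skew and transformation centers.

```python
import math


def decomposedToTransform(translateX=0, translateY=0, rotation=0, scaleX=1, scaleY=1):
    # Build an affine matrix (xx, xy, yx, yy, dx, dy) as
    # translate * rotate * scale; rotation is in degrees.
    cos = math.cos(math.radians(rotation))
    sin = math.sin(math.radians(rotation))
    return (cos * scaleX, sin * scaleX, -sin * scaleY, cos * scaleY,
            translateX, translateY)


def transformPoint(t, point):
    xx, xy, yx, yy, dx, dy = t
    x, y = point
    return (xx * x + yx * y + dx, xy * x + yy * y + dy)


def imageCorners(t, width, height):
    # Map the four corners of the untransformed image rectangle through
    # the transform -- these corners position the image on the canvas.
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return [transformPoint(t, p) for p in corners]
```

With the identity decomposition, the corners come back unchanged; adding translate/rotate/scale moves them accordingly.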
For the next iterations, please make sure that mypy doesn't complain (run it locally before you commit).
@@ -92,6 +93,13 @@ def glyphInfoPath(self):
    def glyphsDir(self):
        return self.path / self.glyphsDirName

    @property
    def binaryDataPath(self):
Please do not work on the Fontra backend until we've actually designed how that data should be stored in there.
@@ -7,3 +7,4 @@ ufomerge==1.8.2
 unicodedata2==15.1.0
 watchfiles==0.24.0
 skia-pathops==0.8.0.post2
+pybase64==1.4.0
What is this for? The base64 module comes with Python.
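For reference, the stdlib module already covers the encode/decode round trip (pybase64, as far as I know, is an API-compatible, faster reimplementation of the same interface; whether that speed matters here is an open question):

```python
import base64

payload = b"\x89PNG\r\n\x1a\n"  # some binary data, e.g. a PNG file header
encoded = base64.b64encode(payload).decode("ascii")
decoded = base64.b64decode(encoded)
assert decoded == payload  # lossless round trip with the stdlib alone
```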
I now understand why you set out retrieving all images: we haven't defined a way to determine from which UFO to load an image. This can be part of the ….
OK, I'm figuring out how to do this properly from the backend's perspective, and how Fontra + UFO can work together. I will write this up as a new PR, covering only the infrastructure changes.
Playground draft PR for discussion.