This commit adds a new `RingIO` type which exposes the internal ring-buffer
code for general use in Python programs. It implements the stream
interface, making it similar to `StringIO` and `BytesIO`, except that
`RingIO` has a fixed buffer size and is safe to use when the reader and
writer are in different threads, or one of them is in an IRQ.
This new type is enabled at the "extra features" ROM level.
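A minimal usage sketch (assuming `RingIO` is exposed via the `micropython`
module; the constructor argument and method names here are assumptions
based on the stream interface described above):

    import micropython

    rb = micropython.RingIO(16)   # fixed-size internal ring buffer
    rb.write(b"abc")              # eg from an IRQ handler or another thread
    print(rb.read(2))             # b'ab'
    print(rb.any())               # number of bytes waiting to be read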
Signed-off-by: Andrew Leech <andrew.leech@planetinnovation.com.au>
Added the "long" modffi tests. The tests could not be added to the existing
ffi_types test because two .exp files were required for the 32-bit and
64-bit results. Code common to both the ffi_types and the "long" type
tests was
factored into ffi_int_base. ffi_types was renamed to ffi_int_types to group
the related tests under the "ffi_int" prefix.
Signed-off-by: Michael Sawyer <mjfsawyer@gmail.com>
Necessary to pass CI when testing the V2 preview APIs.
Also adds an extra coverage test for the legacy stackctrl API, to maintain
coverage and check for any regression.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Updates the rp2 port to always resume from idle within 1ms at most.
When the rp2 port went tickless the behaviour of machine.idle() changed:
there is no longer a tick interrupt to wake it up every millisecond, so on
a quiet system it would block indefinitely. No other port behaves this
way.
See parent commit for justification of why this change is useful.
Also adds a test case that fails without this change.
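Roughly, the shape of such a check (illustrative only, not the exact test
added; the timing threshold is an assumption):

    import time, machine

    t0 = time.ticks_us()
    machine.idle()                             # must resume within ~1ms
    dt = time.ticks_diff(time.ticks_us(), t0)
    print(dt < 2000)                           # expect True, even when quiet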
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This commit makes it so that PyProxy objects are reused (on the JavaScript
side) when they correspond to the same underlying Python object.
For example, when the same Python function is proxied to JavaScript more
than once, the same PyProxy instance is used. This means that if `foo` is
a Python
function then accessing it on the JavaScript side such as
`api.globals().get("foo")` has the property that:
    api.globals().get("foo") === api.globals().get("foo")
Prior to this commit the above was not true because new PyProxy instances
were created each time `foo` was accessed.
Signed-off-by: Damien George <damien@micropython.org>
Before this change, long/mpz ints propagated into all future calculations,
even if their value could fit in a small-int object. With this change, the
result of a big-int binary op will now be converted to a small-int object
if the value fits in a small-int.
For example, a relatively common operation like `x = a * b // c`, where
a, b and c are all small ints, would always result in a long/mpz int: the
intermediate `a * b` can overflow into a big-int, and the final result
then stayed a big-int even when its value fit in a small int, impacting
all future calculations with x.
This adds +24 bytes on PYBV11 but avoids heap allocations and potential
surprises (e.g. `big-big` is now a small `0`, and can safely be accessed
with MP_OBJ_SMALL_INT_VALUE).
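An illustrative consequence at the Python level (on a 32-bit port, where
2**40 does not fit in a small int):

    a = 2 ** 40
    x = a - a    # previously a big-int 0; now normalised to a small-int 0
    # subsequent arithmetic with x stays on the small-int fast path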
Performance tests are unchanged on PYBV10, except for `bm_pidigits.py`
which makes heavy use of big-ints and gains about 8% in speed.
Unix coverage tests have been updated to cover mpz code that is now
unreachable by normal Python code. The unreachable code is kept because
removing it would leave surprising gaps in the internal C functions, the
functionality may be needed in the future, and it has minimal overhead.
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
In JavaScript when accessing an attribute such as `obj.attr` a value of
`undefined` is returned if the attribute does not exist. This is unlike
Python semantics where an `AttributeError` is raised. Furthermore, in some
cases in JavaScript (eg a Proxy instance) `attr in obj` can return false
yet `obj.attr` is still valid and returns something other than `undefined`.
So the only reliable way to determine whether a JavaScript attribute
exists is to attempt `obj.attr` directly.
To more closely match these JavaScript semantics when proxying a JavaScript
object through to Python, change the attribute lookup logic on a `JsProxy`
so that it immediately attempts `obj.attr` instead of first testing if the
attribute exists via `attr in obj`.
This allows JavaScript objects which dynamically create attributes to work
correctly on the Python side, with both `obj.attr` and `obj["attr"]`. Note
that `obj["attr"]` already works in all cases because it immediately does
the subscript access without first testing if the attribute exists.
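Illustrative behaviour on the Python side (assuming `js_obj` is a
`JsProxy` wrapping a JavaScript object that creates attributes
dynamically, eg a JavaScript Proxy):

    value = js_obj.attr      # now works: obj.attr is attempted directly
    value = js_obj["attr"]   # already worked: direct subscript access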
As a benefit, this new behaviour matches the Pyodide behaviour.
Signed-off-by: Damien George <damien@micropython.org>
Follow-up to a84c7a0ed93: that change works most of the time, but has an
intermittent bug where USB doesn't resume as expected after waking from
light sleep.
It turns out that waking calls clocks_init(), which re-initialises the USB
PLL. Most of the time this is OK, but occasionally it seems the clock
glitches the USB peripheral and it stops working until the next hard
reset.
Adds a machine.lightsleep() test that, without this fix, consistently
hangs within the first two dozen iterations on rp2. With this fix it
passed over 100 times in a row.
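The rough shape of such a test (illustrative only, not the exact test
added; the iteration count and sleep period here are assumptions):

    import machine

    for i in range(100):
        print("iteration", i)      # only seen if USB resumed correctly
        machine.lightsleep(100)    # sleep 100ms, then USB must come back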
The test is currently rp2-only, as similar lightsleep USB issues seem to
exist on other ports (both pyboard and ESP32-S3 native USB don't send any
data to the host after waking, until they receive something from the host
first).
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This allows increasing the Python recursion depth if needed.
Also increase the default to 2k words. There is enough RAM in the
browser/node context for this to be increased, and having a larger pystack
allows more complex code to run without hitting the limit.
Signed-off-by: Damien George <damien@micropython.org>
In the webassembly port there is no asyncio run loop running at the top
level. Instead the Python asyncio run loop is scheduled through setTimeout
and run by the outer JavaScript event loop. Because tasks can become
runnable from an external (to Python) event (eg a JavaScript callback), the
run loop must be scheduled whenever a task is pushed to the asyncio task
queue, otherwise tasks may be waiting forever on the queue.
Signed-off-by: Damien George <damien@micropython.org>
This change allows doing a top-level await on an asyncio primitive like
Task and Event.
This feature enables better interaction and synchronisation between
JavaScript and Python, because `api.runPythonAsync` (called from
JavaScript) can now be used to await the completion of asyncio primitives.
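An illustrative snippet as run via `api.runPythonAsync`:

    import asyncio

    async def job():
        await asyncio.sleep(1)
        return 42

    task = asyncio.create_task(job())
    result = await task    # top-level await on a Task now works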
Signed-off-by: Damien George <damien@micropython.org>
Instead of raising KeyError. These semantics match JavaScript behaviour
and make it much more seamless to pass Python dicts through to JavaScript
as though they were JavaScript {} objects.
Signed-off-by: Damien George <damien@micropython.org>
This adds a new undefined singleton to Python, that corresponds directly to
JavaScript `undefined`. It's accessible via `js.undefined`.
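Illustrative:

    import js

    x = js.undefined    # the singleton corresponding to JavaScript undefined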
Signed-off-by: Damien George <damien@micropython.org>
This reverts part of commit fa23e4b093f81f03a24187c7ea0c928a9b4a661b, to
make it so that Python `None` converts to JavaScript `null` (and JavaScript
`null` already converts to Python `None`). That's consistent with how the
`json` module converts these values back and forth.
Signed-off-by: Damien George <damien@micropython.org>
And change Python `None` conversion so it converts to JavaScript
`undefined`.
The semantics for conversion of these objects are then:
- Python None -> JavaScript undefined
- JavaScript undefined -> Python None
- JavaScript null -> Python None
This follows Pyodide:
https://pyodide.org/en/stable/usage/type-conversions.html
Signed-off-by: Damien George <damien@micropython.org>
This commit defines a new `JsException` exception type which is used on the
Python side to wrap JavaScript errors. That's then used when a JavaScript
Promise is rejected, and the reason is then converted to a `JsException`
for the Python side to handle.
This new exception is exposed as `jsffi.JsException`.
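Illustrative, on the Python side (assuming `promise` is a proxy of a
JavaScript Promise that rejects):

    import jsffi

    try:
        await promise
    except jsffi.JsException as e:
        print("JavaScript error:", e)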
Signed-off-by: Damien George <damien@micropython.org>
JavaScript semantics are such that the caller of an async function does not
need to await that function for it to run to completion. This commit makes
that behaviour also apply to top-level async Python code run via
`runPythonAsync()`.
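Illustrative top-level code run via runPythonAsync():

    import asyncio

    await asyncio.sleep(1)
    print("runs to completion even if the JavaScript caller didn't await")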
Signed-off-by: Damien George <damien@micropython.org>
The `reason` in a rejected promise should be an instance of `Error`. That
leads to better error messages on the JavaScript side.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds a significant portion of the existing MicroPython asyncio
module to the webassembly port, using parts of the existing asyncio code
and some custom JavaScript parts.
The key difference to the standard asyncio is that this version uses the
JavaScript runtime to do the actual scheduling and waiting on events, eg
Promise fulfillment, timeouts, fetching URLs.
This implementation does not include asyncio.run(). Instead one just uses
asyncio.create_task(..) to start tasks and then returns to JavaScript,
which will then run the tasks.
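A minimal sketch of this usage pattern:

    import asyncio

    async def blink():
        while True:
            print("tick")
            await asyncio.sleep(1)

    asyncio.create_task(blink())
    # return to JavaScript; its event loop schedules and runs the task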
The implementation here tries to reuse as much existing asyncio code as
possible, and gets all the semantics correct for things like cancellation
and asyncio.wait_for. An alternative approach would be to reimplement
Task, Event, etc using JavaScript Promises. That approach is very
difficult to get right when trying to implement cancellation (because it's
not possible to cancel a JavaScript Promise).
Signed-off-by: Damien George <damien@micropython.org>
This optimises the case where a Python function is, for example, stored to
a JavaScript attribute and then later retrieved from Python. The Python
function is no longer wrapped in a proxy, which avoids the double proxying
otherwise needed for a call going Python -> JavaScript -> Python.
Signed-off-by: Damien George <damien@micropython.org>
This new DMA API corrects possible cache coherency issues on chips with
D-Cache, when working with buffers at arbitrary memory locations (i.e.
supplied by Python code).
The API is used by SPI to fix an issue with corrupt data when reading from
SPI using DMA in certain cases. A regression test is included (it depends
on an external hardware connection).
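Illustrative shape of the affected scenario (the bus number and buffer
size are placeholders, not from this commit):

    from machine import SPI

    spi = SPI(1)
    buf = bytearray(256)    # arbitrary heap buffer; its first/last cache
                            # lines may be shared with unrelated data
    spi.readinto(buf)       # DMA RX: could previously return corrupt data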
Explanation:
1) It's necessary to invalidate D-Cache after a DMA RX operation completes
in case the CPU reads (or speculatively reads) from the DMA RX region
during the operation. This seems to have been the root cause of issue
#13471 (which in this case only occurred when src==dest).
2) More generally, it is also necessary to temporarily mark the first and
last cache lines of a DMA RX operation as "uncached", in case the DMA
buffer shares this cache line with unrelated data. The CPU could
otherwise write the other data at any time during the DMA operation (for
example from an interrupt handler), creating a dirty cache line that's
inconsistent with the DMA result.
Fixes issue #13471.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The current `ssl` module has quite a few differences to the CPython
implementation. This change moves the MicroPython variant to a new `tls`
module and provides a wrapper module for `ssl` (in micropython-lib).
Users who rely only on the compatible behaviour can continue to use
`ssl`, while users who rely on non-compatible behaviour should switch to
`tls`. The facade in `ssl` can then be made to adhere more strictly to
CPython.
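Illustrative (a minimal sketch; the constant name follows CPython's ssl
module and is an assumption here):

    import tls

    # create a client-side context using the MicroPython-specific module
    ctx = tls.SSLContext(tls.PROTOCOL_TLS_CLIENT)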
Signed-off-by: Felix Dörre <felix@dogcraft.de>
The timing of the DMA transfer can vary a bit, so tweak the allowed values.
Also test the return value of `rp2.DMA.irq.flags()` to make sure the IRQ is
correctly signalled.
Signed-off-by: Damien George <damien@micropython.org>