
Conversation

@firewave
Collaborator

No description provided.

@firewave
Collaborator Author

I tried to do this via an option. It would be possible to pass this as an internal field via Settings, but it would be problematic to get that into all the Settings objects in the tests.
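
For context, a minimal sketch of the rejected approach, assuming a hypothetical internal flag; the field name `dumpTestCode` is invented for illustration and does not exist in the actual Settings class:

```cpp
// Hypothetical sketch of the rejected approach - not actual cppcheck code.
struct Settings {
    // ... existing options ...

    // Internal flag: when set, the checked code is also written out as
    // corpus files. The catch: every Settings object constructed in the
    // tests would need this set explicitly, which is why it was dropped.
    bool dumpTestCode = false;
};
```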

@firewave
Collaborator Author

The idea is to generate a corpus from all our test cases and publish it as an artifact that the OSS-Fuzz integration can pull and use. This will give us a broad spectrum of code.

Some of the files are rather big though, and I wonder if we should filter those out, since they could cause misleading timeouts and overly large inputs. But that could also be handled by specifying the maximum input size via a fuzzing CLI parameter.
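
For reference, libFuzzer already supports capping input size at run time via its `-max_len=<N>` option, and a target can also reject oversized inputs itself. A minimal sketch, assuming a 64 KiB cap (the value is an arbitrary placeholder):

```cpp
#include <cstddef>
#include <cstdint>

// Assumed size cap - the actual value would need tuning for our corpus.
static const size_t kMaxInputSize = 64 * 1024;

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    // Skip oversized corpus entries so large test files cannot dominate
    // execution time or cause misleading timeouts.
    if (size > kMaxInputSize)
        return 0;
    (void)data; // ... hand data/size to the code under test ...
    return 0;
}
```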

@firewave force-pushed the testrunner-dump branch 6 times, most recently from 1e619f4 to 49fba2a, on January 17, 2025
@firewave changed the title from "generate a fuzzing corpus by extracting code from testrunner" to "refs #12442 - generate a fuzzing corpus by extracting code from testrunner" on Jan 17, 2025
@firewave force-pushed the testrunner-dump branch from 49fba2a to c8d31df on May 5, 2025
@firewave force-pushed the testrunner-dump branch 2 times, most recently from 8bdab58 to 423ff5b, on October 21, 2025
@firewave
Collaborator Author

If the code relies on additional files (e.g. includes), it would not work as expected. This suggests that we should move tests which require additional files from the unit tests to the Python tests.
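
A hypothetical filter for such snippets when writing out the corpus might look like this; `usesAdditionalFiles` is an invented helper, not part of this PR:

```cpp
#include <string>

// Hypothetical helper: detect test snippets that reference local headers,
// which a standalone corpus entry cannot resolve.
static bool usesAdditionalFiles(const std::string &code) {
    return code.find("#include \"") != std::string::npos;
}
```

Checking only quoted includes would keep snippets using standard headers (`#include <...>`) in the corpus, since those resolve without extra files.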
