text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: Help us with a new logo
Body: Our logo is pretty boring, anyone with good design skills can help us design one? :) | 0easy
|
Title: How are samples generated with RegressionModel / MLPRegressor?
Body: Hi there,
I tried sklearn's `MLPRegressor` with the `RegressionModel` wrapper, and to my surprise, I was able to generate samples with `historical_forecasts` (e.g. with `num_samples = 1000`). How is this possible? Neither `RegressionModel` nor `MLPRegressor` accepts any kind of likelihood or loss function as an argument or am I missing something?
`model.supports_probabilistic_prediction` is actually false in my case but I can still generate samples.
I would be happy to use this but it is not officially supported, right?
| 0easy
|
Title: Client does not use alternative schema when provided
Body: **Describe the bug**
Client does not pass the schema to the initialization of the SyncPostgresClient object. This causes any call to the database to assume that the default public schema is used instead of the provided schema.
**To Reproduce**
1. Create a table in an alternate schema, where a table does not exist with the same name in the public schema.
2. Create a client object using the create_client() function and provide it an alternative schema in the ClientOptions dict.
3. Query the table or attempt an insert into the table
**Expected behavior**
Table is queried / inserted into properly.
**Screenshots**

supabase-py version 0.5.6
**Available Workaround**
It is possible to currently call the schema function on the postgrest object within the client object in order to properly set the schema.
ex: supabase.postgrest.schema('alternateSchema')
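For illustration, a rough sketch of using this workaround with the client (assumes supabase-py around v0.5.x; the URL, key, and table name are placeholders):
```python
from supabase import create_client

SUPABASE_URL = "https://xyz.supabase.co"  # placeholder
SUPABASE_KEY = "public-anon-key"          # placeholder

supabase = create_client(SUPABASE_URL, SUPABASE_KEY)
# Work around the ignored ClientOptions schema by setting it on the postgrest client directly:
supabase.postgrest.schema("alternateSchema")
rows = supabase.table("my_table").select("*").execute()
```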
| 0easy
|
Title: nb cli functionality is documented in the cli --help but not in the docs
Body: We need to document the command in the docs in a similar manner as we do for nb --help.
It should state supported formats and how to convert between them. | 0easy
|
Title: [FEATURE] boston housing
Body: sklearn wants to get rid of it ... should we have it as a fairness lesson? | 0easy
|
Title: Rename Point class to Vector
Body: We are using the Point class for 3D vectors as well. I think the Point name is loaded; Vector is a more general term for a set of three numbers | 0easy
|
Title: Incorrect type hint for `value` in `Url.validate()`
Body: Currently the type hint for `value` in `.validate()` of the url classes is `value: Union[T, np.ndarray, Any]`.
It should be `value: Union[T, str, Any]` though, since we actually expect a string here, not a numpy array.
This is the case for:
- `AudioUrl`
- `AnyUrl`
- `ImageUrl`
- `VideoUrl`
- `Url3D`
| 0easy
|
Title: Fix README's Incorrect list-item indent and Misaligned table fence
Body: # See: [Codacy report](https://app.codacy.com/manual/mithi/hexapod-robot-simulator/issues?&filters=W3siaWQiOiJMYW5ndWFnZSIsInZhbHVlcyI6WyJNYXJrZG93biJdfSx7ImlkIjoiQ2F0ZWdvcnkiLCJ2YWx1ZXMiOlsiQ29kZVN0eWxlIl19LHsiaWQiOiJMZXZlbCIsInZhbHVlcyI6WyJXYXJuaW5nIl19LHsiaWQiOiJQYXR0ZXJuIiwidmFsdWVzIjpbXX0seyJpZCI6IkF1dGhvciIsInZhbHVlcyI6W119XQ==) | 0easy
|
Title: [Documentation] Add tutorial on how to apply MixUp including part of adding to the loss function.
Body: We need a tutorial on how to apply MixUp to the data and the loss function.
We probably can do it as a part of the training classification pipeline in Pytorch Lightning. | 0easy
|
Title: Render Newlines in Marketplace Description Text
Body: Newlines are currently not rendered in Marketplace Description text. This turns well-formatted text into solid blocks with no paragraphs or bullet points, which is not nice to look at.
Example:
```
Key Features:
- 4-day advance notice on expiring domains
- AI-powered value estimation and ranking
- Automated daily scanning with easy schedule setup
- Focus only on potentially valuable domains
```
Renders as:
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/a683afc4-5e61-4336-9ff0-25d3b0008545/a0d383c3-0c79-404e-a929-3134b90d6037?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9hNjgzYWZjNC01ZTYxLTQzMzYtOWZmMC0yNWQzYjAwMDg1NDUvYTBkMzgzYzMtMGM3OS00MDRlLWE5MjktMzEzNGI5MGQ2MDM3IiwiaWF0IjoxNzM1ODY0Njc2LCJleHAiOjMzMzA2NDI0Njc2fQ.Vhtzw16lmJIZdfuqDqNv2StwflIccpAZkVjddYVPuW4 " alt="image.png" width="510" />
On [https://platform.agpt.co/store/agent/autogpt/domain-drop-catcher](https://platform.agpt.co/store/agent/autogpt/domain-drop-catcher) | 0easy
|
Title: Forward/Backward Fill
Body: # Brief Description
I would like to propose a forward/backward fill method, similar to ``fill`` in tidyr. In Pandas, forward/backward fill can be achieved, but you cannot select directions for different columns within a dataframe.
# Example API
@pf.register_dataframe_method
def fill(df, columns=..., directions=...):
    """
    Forward or backward fill on a column or columns of a dataframe.

    `directions` is a sequence of directions ("up", "down", "updown", "downup"), one per column.
    """
    for column in columns:
        df[column] = df[column].ffill()  # forward fill ("down") will be the default
    return df
```python
example data :
df = pd.DataFrame(data={'a':[1,2,3,None],'b':[4,5,None,6],'c':[None,None,7,8]})
a b c
0 1.0 4.0 NaN
1 2.0 5.0 NaN
2 3.0 NaN 7.0
3 NaN 6.0 8.0
#downward fill (backward fill in Pandas) on only the `a` column:
df.fill(columns="a", directions = "down")
# downward fill on ``a``, upward fill on ``c`` :
# the idea is the directions will be paired with the columns
# similar to what exists when sorting multiple columns in Pandas' sort_values
# with booleans passed to the ``ascending`` argument
df.fill(columns=["a", "c"], directions = ["down", "up"])
# currently in Pandas, ffill and bfill are applied to all the columns that have null values,
# you cannot select which columns to be forward or backward filled.
```
Still undecided on the name, as I do not feel it clearly conveys what it should do without reading the function description. | 0easy
|
Title: replace `as_matrix()` with `.values` in `format_data`
Body: The new version of pandas throws the following warning during calls to `format_data`:
```
/opt/conda/lib/python3.6/site-packages/hypertools/tools/df2mat.py:38: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
plot_data = df_num.as_matrix()
```
We should use `df_num.values` rather than `df_num.as_matrix()` | 0easy
|
Title: [CI] Replace `:object_manager` with smaller Bazel targets.
Body: ### Description
#50885 splits `:object_manager` into smaller targets. Therefore, we can also replace `:object_manager` with smaller targets as dependencies. For example, `pull_manager_test.cc` only requires `"ray/object_manager/pull_manager.h"`, so we don't need to use the entire `object_manager`.
```
ray_cc_test(
name = "pull_manager_test",
size = "small",
srcs = [
"src/ray/object_manager/test/pull_manager_test.cc",
],
tags = ["team:core"],
deps = [
":object_manager",
"@com_google_googletest//:gtest_main",
],
)
```
Part of #50586
### Use case
_No response_ | 0easy
|
Title: Add tight layout option that enforces constant spacing between rows and columns
Body: The ProPlot "tight layout" algorithm permits variable spacing between successive rows and columns. However, for the sake of symmetry/aesthetics, users may occasionally want constant spacing like in matplotlib.
This will be trivial to implement in proplot. We can add keyword args (e.g. `wequal`, `hequal`, and `equal`) to the `Figure` class that force the tight layout algorithm to use the largest of the calculated spaces for each row-row/column-column interface.
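As a hedged illustration of how the proposed keywords might be used (the keyword names below are this issue's proposal, not an existing API):
```python
import proplot as pplt

# `wequal`/`hequal` would force equal spacing across all column/row interfaces.
fig, axs = pplt.subplots(ncols=3, nrows=2, wequal=True, hequal=True)
axs.format(suptitle="Tight layout with constant spacing")
```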
| 0easy
|
Title: Marketplace - Some weird interaction that goes on w/ the Featured agent cards (cards get cut off on sides)
Body: ### Describe your issue.
Some weird interaction goes on with the Featured agent cards that I've noticed.
When you navigate through the cards, they get cut off on the sides. Is it possible to make sure that the cards don't get cut off when a user is interacting with them? And is it possible to make the cards expand with the window?
this awkward gap here

Can we model it like these cards that Apple has on their app store website?
Like these cards here
https://github.com/user-attachments/assets/9e02b750-5002-40e1-8a9e-3dd4f9a0883a
They're at the very bottom of this page: https://www.apple.com/app-store/
Basically these cards are exactly what I was envisioning for our cards
Take a look at how they animate and how the button states behave (they're grey at the end of the list); if you expand or compress the window, the cards stay in frame.
| 0easy
|
Title: Color choice for multivariate timeseries
Body: **Is your feature request related to a current problem? Please describe.**
Currently, it is not possible to choose the color of each component in a multivariate time series. It is only possible to choose a single color for all components of the time series.
**Describe proposed solution**
Enable the user to provide a list of colors (that must be as long as the number of components in the time series) to set the colors of the components in the plot.
**Describe potential alternatives**
Alternatively, one could provide the name of a colormap and darts would automatically sample colors from the colormap for the components of the time series.
| 0easy
|
Title: add `FormattedText` component
Body: Defined as something like
```py
class FormattedText(pydantic.BaseModel, extra='forbid'):
text: str
text_format: typing.Literal['bold', 'italic', 'underline', 'strikethrough'] | None = pydantic.Field(None, serialization_alias='textFormat')
# TODO, use pydantic-extra-types Color?
text_color: str | None = pydantic.Field(None, serialization_alias='textColor')
background_color: str | None = pydantic.Field(None, serialization_alias='backgroundColor')
type: typing.Literal['FormattedText'] = 'FormattedText'
```
The items in components like `Button` and `Link` become `list[str | FormattedText | Link]`. We might want to provide a few more components specifically for typography, like `Bullets` and `Numbers`.
Two reasons to do this over relying on `Markdown`:
1. Markdown is a relatively heavy, lazily loaded module that it would be nice not to have to load
2. Markdown, by its definition, is terrible for templating or user-defined values - it's virtually impossible to escape. Providing these types would allow text that includes user values to be defined safely | 0easy
|
Title: Unblock examples/surface_timeseries_.py for the gallery
Body: ## 🧰 Task
Reference: https://github.com/napari/napari/pull/7487#issuecomment-2668444863
Depends on #7487.
The surface timeseries example doesn't get built right now because at one point, nilearn was incompatible with recent numpy. That is no longer the case so we should build it because it is a badass example. | 0easy
|
Title: [nlp_data] Add OSCAR Corpus
Body: ## Description
OSCAR Corpus: https://oscar-corpus.com/
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
``` | 0easy
|
Title: [code refactor] features -> KNN graph code in Datalab Issue Managers
Body: The KNN graph is currently being constructed in each IssueManager, via duplicated code from the other IssueManagers.
Goal: make this a helper method that is reused across the issue-managers | 0easy
|
Title: named styles
Body: The idea is to provide something a bit like [bootstrap's](https://getbootstrap.com/docs/4.0/components/buttons/) "primary", "secondary", "success" etc. to customise components without resorting to the sledgehammer of CSS.
I think we have:
* `primary` - default
* `secondary` - for "not the main thing to do"
* `warning` - for "this does bad/irreversible" stuff
That should be enough for now. | 0easy
|
Title: xxx.split('/')[-1] may lead to bug when you run eval.py in Windows
Body: When I first ran eval.py just like the Quick Start said:
>>python ../../tools/eval.py \
--tracker_path ./results \ # result path
--dataset VOT2016 \ # dataset name
--num 1 \ # number thread to eval
--tracker_prefix 'model' # tracker_name
(PS: the argument should be model, not 'model'; there are no quotes)
I got an error:
loading VOT2016: 100%|████████████████████| 60/60 [00:01<00:00, 49.93it/s, wiper]
eval ar: 100%|████████████████████| 1/1 [00:00<00:00, 1.97it/s]
eval eao: 0%| | 0/1 [00:00<?, ?it/s]multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\\---\\.conda\envs\pysot\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "F:\\---\\pysot-master\toolkit\evaluation\eao_benchmark.py", line 47, in eval
eao = self._calculate_eao(tracker_name, self.tags)
File "F:\\---\\pysot-master\toolkit\evaluation\eao_benchmark.py", line 111, in _calculate_eao
max_len = max([len(x) for x in all_overlaps])
ValueError: max() arg is an empty sequence
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "../../tools/eval.py", line 147, in <module>
main()
File "../../tools/eval.py", line 129, in main
trackers), desc='eval eao', total=len(trackers), ncols=100):
File "C:\Users\\---\\.conda\envs\pysot\lib\site-packages\tqdm\std.py", line 1102, in __iter__
for obj in iterable:
File "C:\Users\\---\\.conda\envs\pysot\lib\multiprocessing\pool.py", line 748, in next
raise value
ValueError: max() arg is an empty sequence
eval eao: 0%| | 0/1 [00:00<?, ?it/s]
Finally, I found the cause of this bug.
trackers = [x.split('/')[-1] for x in trackers] in File "../../tools/eval.py", line 37, in main()
This code splits the tracker name incorrectly: the return value is 'results\\VOT2016\\model' while the correct value should be 'model'.
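A platform-independent sketch of the fix (instead of hardcoding either separator):
```python
import os

# os.path handles both '/' and '\\' separators, so this works on every platform:
trackers = [os.path.basename(os.path.normpath(x)) for x in trackers]
```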
Conclusion:
On Windows, this code should change "x.split('/')" to "x.split('\\')" (or use a platform-independent approach like the sketch above). Besides this, I haven't met any other bug caused by it, but I'm not sure whether other places that use x.split('/') may lead to bugs or not. | 0easy
|
Title: Change Request Commits metric API
Body: The canonical definition is here: https://chaoss.community/?p=4731 | 0easy
|
Title: HTTP 500 on patch_group (KeyError)
Body: **Steps to reproduce**
_docker run -p 8888:8888 kinto/kinto-server_
Running kinto 14.0.1.dev0.
First request
```
POST /v1/buckets HTTP/1.1
Host: 127.0.0.1:8888
Authorization: Basic AAAA
Content-Type: application/json
Content-Length: 222
{"data": {"collection:schema": {}, "group:schema": {}, "record:schema": {}}, "permissions": {"collection:create": ["write_account"], "group:create": ["write_account"], "read": ["read_account"], "write": ["write_account"]}}
```
First response HTTP 201
```
{
"permissions": {
"read": [
"read_account"
],
"write": [
"write_account",
"account:admin"
],
"collection:create": [
"write_account"
],
"group:create": [
"write_account"
]
},
"data": {
"collection:schema": {},
"group:schema": {},
"record:schema": {},
"id": "6UULOFrV",
"last_modified": 1608558398847
}
}
```
Second request
```
PUT /v1/buckets/6UULOFrV/groups/0 HTTP/1.1
Host: 127.0.0.1:8888
User-Agent: python-requests/2.24.0
Accept-Encoding: gzip,deflate
Response-Behavior: diff
If-None-Match: "0"
Authorization: Basic AAAA
Content-Type: application/json
Content-Length: 83
{"data": {}, "permissions": {"read": ["read_account"], "write": ["write_account"]}}
````
Second response HTTP 201
```
{
"permissions": {
"read": [
"read_account"
],
"write": [
"write_account",
"account:admin"
]
},
"data": {
"members": [],
"id": "0",
"last_modified": 1608558411303
}
}
```
Third request
```
PATCH /v1/buckets/6UULOFrV/groups/0 HTTP/1.1
Host: 127.0.0.1:8888
User-Agent: python-requests/2.24.0
Accept-Encoding: gzip,deflate
Response-Behavior: diff
If-None-Match: "0"
Authorization: Basic AAAA
Content-Type: application/json
Content-Length: 83
{"data": {}, "permissions": {"read": ["read_account"], "write": ["write_account"]}}
```
Third response HTTP 500
```
{
"code": 500,
"errno": 999,
"error": "Internal Server Error",
"message": "A programmatic error occured, developers have been informed.",
"info": "https://github.com/Kinto/kinto/issues/"
}
```
Log
```
"PATCH /v1/buckets/6UULOFrV/groups/0?" ? (? ms) 'id' errno=999
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pyramid/tweens.py", line 41, in excview_tween
response = handler(request)
File "/app/kinto/core/events.py", line 159, in tween
request.registry.notify(event)
File "/usr/local/lib/python3.7/site-packages/pyramid/registry.py", line 109, in notify
[_ for _ in self.subscribers(events, None)]
File "/usr/local/lib/python3.7/site-packages/zope/interface/registry.py", line 448, in subscribers
return self.adapters.subscribers(objects, provided)
File "/usr/local/lib/python3.7/site-packages/zope/interface/adapter.py", line 619, in subscribers
subscription(*objects)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/adapters.py", line 129, in subscriber_wrapper
return derived_subscriber(*arg)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/adapters.py", line 101, in derived_subscriber
return subscriber(arg[0])
File "/app/kinto/views/groups.py", line 79, in on_groups_changed
group_uri = f"/buckets/{event.payload['bucket_id']}/groups/{group['id']}"
KeyError: 'id'
"PATCH /v1/buckets/6UULOFrV/groups/0?" 500 (7 ms) agent=python-requests/2.24.0 authn_type=account errno=999 time=2020-12-21T12:47:57.573000 uid=admin
``` | 0easy
|
Title: Overwriting dirname causes test suite not to fail if snapshots unused
Body: **Describe the bug**
Within Home Assistant we have a SnapshotExtension to overwrite the default `__snapshots__` dir name with `snapshots`. But today I actually noticed that this had the side effect that it doesn't fail the test suite if there are unused snapshots in the snapshots folder.
```py
class HomeAssistantSnapshotExtension(AmberSnapshotExtension):
    """Home Assistant extension for Syrupy."""

    VERSION = "1"
    """Current version of serialization format.

    Need to be bumped when we change the HomeAssistantSnapshotSerializer.
    """

    serializer_class: type[AmberDataSerializer] = HomeAssistantSnapshotSerializer

    @classmethod
    def dirname(cls, *, test_location: PyTestLocation) -> str:
        """Return the directory for the snapshot files.

        Syrupy, by default, uses the `__snapshots__` directory in the same
        folder as the test file. For Home Assistant, this is changed to just
        `snapshots` in the same folder as the test file, to match our `fixtures`
        folder structure.
        """
        test_dir = Path(test_location.filepath).parent
        return str(test_dir.joinpath("snapshots"))
```
**To reproduce**
Add the SnapshotExtension. Have it generate a snapshot. Add a fake entry to the snapshot. Run the test suite.
**Expected behavior**
I'd expect it to fail the test suite because there are entries that are not found. I can imagine that we have quite a basic way of overwriting the directory and can imagine people with way more interesting setups for the snapshots which would probably break with the expected behaviour. So I would not be surprised if the failure is turned off when the dirname is overwritten, but I'd then at least expect to read something in a docstring or in the docs.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
**Environment (please complete the following information):**
- OS: mac OS
- Syrupy Version: 4.6.1
- Python Version: 3.12
**Additional context**
<!-- Add any other context about the problem here. -->
| 0easy
|
Title: [DOCS] Example usage in docstring
Body: As [discussed in #586](https://github.com/koaning/scikit-lego/pull/586#discussion_r1385475426), although most of the library features are documented in the user guide, the best way to showcase how to access and use each class/function would be a minimal example usage in the docstrings. And this is currently covered only for a subset of the library features.
Here the list of all the remaining classes that would benefit from it:
- [x] `preprocessing.outlier_remover.OutlierRemover` (solved in #639 )
- [x] `meta.outlier_classifier.OutlierClassifier` (solved in #646)
- [x] `preprocessing.dictmapper.DictMapper` (solved in #646)
- [x] `preprocessing.pandastransformers.PandasTypeSelector` (solved in #648)
- [x] `preprocessing.projections.InformationFilter` (solved in #648)
- [x] `preprocessing.repeatingbasis.RepeatingBasisFunction` (solved in #648)
- [x] `preprocessing.formulaictransformer.FormulaicTransformer` (solved in #648)
- [x] `preprocessing.identitytransformer.IdentityTransformer` (solved in #648)
- [x] #650
- [x] `linear_model.ProbWeightRegression` (solved in #691)
- [x] `linear_model.DeadZoneRegressor` (solved in #691)
- [x] `linear_model.DemographicParityClassifier` (solved in #691)
- [x] `linear_model.EqualOpportunityClassifier` (solved in #691)
- [x] #652
- [x] #653
- [x] `mixture.bayesian_gmm_classifier.BayesianGMMClassifier`
- [x] `mixture.bayesian_gmm_detector.BayesianGMMOutlierDetector`
- [x] `mixture.gmm_classifier.GMMClassifier`
- [x] `mixture.gmm_detector.GMMOutlierDetector`
- [ ] `model_selection.TimeGapSplit`
- [ ] `model_selection.GroupTimeSeriesSplit`
- [ ] `model_selection.KlusterFoldValidation`
- [ ] `naive_bayes.GaussianMixtureNB`
- [ ] `naive_bayes.BayesianGaussianMixtureNB`
- [ ] `neighbors.BayesianKernelDensityClassifier`
- [ ] `meta.confusion_balancer.ConfusionBalancer`
- [ ] `meta.estimator_transformer.EstimatorTransformer`
- [ ] `meta.grouped_predictor.GroupedPredictor`
- [ ] `meta.grouped_transformer.GroupedTransformer`
- [ ] `meta.regression_outlier_detector.RegressionOutlierDetector`
- [ ] `meta.subjective_classifier.SubjectiveClassifier`
- [ ] `meta.thresholder.Thresholder`
- [ ] `preprocessing.intervalencoder.IntervalEncoder`
As an instance of such minimal example you can refer to [`QuantileRegression` docstring section](https://github.com/koaning/scikit-lego/blob/main/sklego/linear_model.py#L1140), which renders as in its [API section](https://koaning.github.io/scikit-lego/api/linear-model/#sklego.linear_model.QuantileRegression).
If possible try to add **one** unique example covering the relevant features and methods in the top level docstring of the class. | 0easy
|
Title: artifacts in fill under curve for histograms in log mode
Body: ### Short description
Some histograms are not properly filled when log mode is enabled (for X or Y axis).
### Steps to reproduce
1. Run `histogram.py` from examples.
2. Right-click to bring up the menu, select "Plot Options" -> "Transforms" -> "log X" or "log Y"
Note that the dataset is randomly generated and it does not happen every single time. It does happen about 7 out of 10 times for me.
### Expected behavior
Histogram stepped-curve is properly filled all the way down to fill level.
### Real behavior
Fill is incomplete, cut off.

### Tested environment(s)
* PyQtGraph version: `0.12.4`
* Qt Python binding: `PySide6 6.2.1 Qt 6.2.1`
* Python version: `Python 3.9.7`
* NumPy version: `1.21.6`
* Operating system: `Linux fes 5.13.0-44-generic #49-Ubuntu SMP Wed May 18 13:28:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
* Installation method: `pip3 install -r requirements.txt`
```
pyqtgraph>=0.12.3
PySide6==6.2.1 # 6.2.2 throws exception with pyqtgraph == 0.12.3
cython>=0.29.24
psutil~=5.8.0
numpy~=1.21.4
sharedmem>=0.3.8
aiohttp~=3.8.1
scipy>=1.6.0
setuptools>=60.5.0
h5py>=3.6.0
Pillow>=9.0.0
aiortc==1.2.1
wheel>=0.34.2
fast-histogram>=0.10
```
### Additional context
| 0easy
|
Title: Native cron scheduler doesn't match convention
Body: 40 10 * * 1-5 is running Tuesday to Saturday rather than Monday to Friday. | 0easy
|
Title: [Documentation] Add tutorial about domain adaptation
Body: It is not widely known that we have domain adaptation transforms. This requires a separate tutorial. | 0easy
|
Title: r2 metric report values
Body: When using r2 as eval metric for regression task (with 'Explain' mode) the metric values reported in Leaderboard (at README.md file) are multiplied by -1.
For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I noticed this when one of the R2 values in the Leaderboard was > 1 (~45), which shouldn't be possible for the R2 metric; looking at the detailed results, I saw that the value is actually ~ -45.
| 0easy
|
Title: Different behavior of `get_header` between urllib.Request and WrappedRequest
Body: <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
I believe `WrappedRequest` is a class for supporting methods in `urllib.Request`, but I found that the behavior of the `get_header` method in `WrappedRequest` is different from `urllib.Request`'s.
When trying to get a header which is not present, without providing a default value, it should return `None` (https://docs.python.org/3/library/urllib.request.html#urllib.request.Request.get_header),
but `get_header` in `WrappedRequest` raises `TypeError: to_unicode must receive a bytes or str object, got NoneType`.
### Steps to Reproduce
```
from urllib.request import Request as _Request
from scrapy.http.request import Request
from scrapy.http.cookies import WrappedRequest
a = _Request(url="https://a.example")
print(a.get_header('xxxx'))
b = WrappedRequest(Request(url="https://a.example"))
print(b.get_header('xxxx'))
```
**Expected behavior:**
```
None
None
```
**Actual behavior:**
```
None
TypeError: to_unicode must receive a bytes or str object, got NoneType
```
### Additional context
This issue is currently not a problem when interacting with the `CookieJar` class.
Thus, it is not an urgent matter unless one is importing and using `WrappedRequest`.
However, I think it would enhance the reliability in this project.
Thank you! | 0easy
|
Title: intraday data ( extended history) docs
Body: With the current state of the Python API we are not able to extract intraday stock data going back more than 2 months, although the Alpha Vantage API already has a feature to get it.
It would be very helpful if we could make this possible some way.
Thank you. | 0easy
|
Title: The cherry on top!
Body: Not an issue, but proof you (and contributors) nailed it:
https://youtu.be/wZRV2H4PK0Q
Fantastic video which shows:
- Great little history lesson regarding master-engineers, previously the guys who put tape to vinyl
- Incl. how the whole loudness war started, parts I didn't know, that built up to this
- Cherry on top: **Matchering 2.0 beating all other AI's**! (note: by carefully selecting a proper song as reference)
- Ranking it no.3 (out of 12) right behind 2 professional master engineers
- Seems a rock-solid study with 472 judged entries
FYI: It's also built into [UVR](https://ultimatevocalremover.com/) under Audio Tools > Matchering
If I were you, I'd put the video in your readme too. Something to show off. Proud of you guys 🔥 | 0easy
|
Title: dismiss "Black is not installed, parameters wont be formatted" message
Body: papermill shows a logging message if black is not installed, which is causing confusion for ploomber users.
https://github.com/nteract/papermill/blob/aecd0ac16d34dd41a1a8e4ca3351844932af3d94/papermill/translators.py#L195
we should suppress the message | 0easy
|
Title: Support autofocus
Body: For all our Inputs it makes total sense to support autofocus:
https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/autofocus
vuetify2 and 3 support it, and it's the HTML standard. | 0easy
|
Title: Add custom snippet path support
Body: ### Description
I would like to be able to have Marimo include a custom path to snippets that will load within the snippets list.
### Suggested solution
I imagine this would be provided in two ways:
1. Via the `pyproject.toml` file:
```toml
[project]
name = "marimo-example"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = []
custom_snippet_path = "./_snippets"
```
2. Via an ENV variable:
```
CUSTOM_SNIPPET_PATH=./_snippets marimo edit
```
This path would then be added to the `snippet_files` method in the `marimo/_snippets/snippets.py` file:
```python
def snippet_files() -> Generator[str, Any, None]:
    root = os.path.realpath(
        str(import_files("marimo").joinpath("_snippets").joinpath("data"))
    )
    for _root, _dirs, files in os.walk(root):
        for file in files:
            if file.endswith(".py"):
                yield os.path.join(root, file)
```
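A hedged sketch of how `snippet_files` might also honor the proposed environment variable (the `CUSTOM_SNIPPET_PATH` name and behavior are this issue's proposal, not an existing Marimo API):
```python
import os
from importlib.resources import files as import_files  # assumed to match the helper used above
from typing import Any, Generator

def snippet_files() -> Generator[str, Any, None]:
    roots = [
        os.path.realpath(
            str(import_files("marimo").joinpath("_snippets").joinpath("data"))
        )
    ]
    custom = os.environ.get("CUSTOM_SNIPPET_PATH")  # proposed override, hypothetical
    if custom:
        roots.append(os.path.realpath(custom))
    for root in roots:
        for _root, _dirs, files in os.walk(root):
            for file in files:
                if file.endswith(".py"):
                    yield os.path.join(_root, file)
```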
### Alternative
I have considered using metaprogramming to override this method, but a core solution would be much better.
### Additional context
This will allow for massive adoption within my dev team and would allow us to share knowledge and snippets across different notebooks that are all within the same `marimo edit` server | 0easy
|
Title: Garmin - treadmill run records don't seem to be synced?
Body: Is it because treadmill runs have no GPS track? | 0easy
|
Title: `logfire whoami` should respect the `LOGFIRE_TOKEN` env var.
Body: and I guess `pyproject.toml` and anywhere else we look for a token, e.g. it should have the same semantics in terms of finding a project as
```bash
python -c 'import logfire; logfire.info("hello world")'
``` | 0easy
|
Title: [DOC] Bring the Jupyter workaround tips elsewhere in the documentation
Body: Hey Pydantic, thank you for this project.
**Reference**
#144 raises the issue of being unable to run the example in the doc, later fixed by #214
**Doc improvement request**
#214 put the workaround notice only in the "Agent" section, which makes it barely visible in the documentation as a whole. For instance, I ran into this issue while testing function tools, in the corresponding section.
Does it make sense to add this workaround notice to the main tutorial pages? I guess many new users will run into this, and some might not take the time to check your GitHub issue tracker.
Thanks | 0easy
|
Title: Always include `additionalProperties: True` for dictionary schemas
Body: ### Initial Checks
- [x] I have searched Google & GitHub for similar requests and couldn't find anything
- [x] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
Currently for a model like this
```py
from typing import Any, Dict
from pydantic import BaseModel
class Model(BaseModel):
conditions: Dict[str, Any]
print(Model.model_json_schema())
```
Pydantic will generate this JSON Schema
```json
{
"properties": {
"conditions": {
"title": "Conditions",
"type": "object"
}
},
"required": [
"conditions"
],
"title": "Model",
"type": "object"
}
```
I think it'd be a nice improvement to include `additionalProperties: True`, even though it is the spec default as it just makes the schemas easier to read / parse / validate imo, e.g.:
```json
{
"title": "Conditions",
"type": "object",
"additionalProperties": true,
}
```
I'm not aware of any downsides that being explicit here would bring? Apart from the potential for breaking changes?
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [x] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | 0easy
|
Title: Add `secret` attribute to columns
Body: We have a `Secret` column type:
https://piccolo-orm.readthedocs.io/en/latest/piccolo/schema/column_types.html#secret
The reason this exists is to prevent inadvertent leakage of sensitive data. You can use it like this:
```python
class Property(Table):
address = Text()
code = Secret()
>>> await Property.select(exclude_secrets=True)
[{'address': 'Buckingham Palace'}]
```
`PiccoloCRUD` also supports it.
The issue is `Secret` is a subclass of `Varchar` - there's no way to have a secret `Integer` for example.
The change will involve adding a `secret` parameter to `Column` (and hence is accessible to all subclasses). The `Secret` column type will still be kept for backwards compatibility, but will be deprecated.
```python
class Property(Table):
address = Text()
code = Varchar(secret=True)
value = Integer(secret=True)
```
| 0easy
|
Title: Show warning if `batch` & `unbatch` is implemented but max_batch_size not set in `LitServer`
Body: ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
### Pitch
<!-- A clear and concise description of what you want to happen. -->
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
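A minimal sketch of the kind of check this could add (the class and method names follow litserve's public `LitAPI`/`LitServer` interface as I understand it; the helper itself is hypothetical):
```python
import warnings

from litserve import LitAPI

def warn_if_batching_unused(api: LitAPI, max_batch_size: int) -> None:
    # Assumes LitAPI provides default batch()/unbatch() implementations that users override.
    overrides_batching = (
        type(api).batch is not LitAPI.batch or type(api).unbatch is not LitAPI.unbatch
    )
    if overrides_batching and max_batch_size <= 1:
        warnings.warn(
            "batch()/unbatch() are implemented, but max_batch_size is not set on "
            "LitServer, so they will never be used."
        )
```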
| 0easy
|
Title: [DOCS] Specifying R version in runtime.txt does not seem to be mentioned
Body: Hey 👋 I believe an R environment can now be described by the following in a `runtime.txt` file:
```
r-3.5-YYYY-MM-DD
```
But that doesn't seem to be reflected in the current documentation [here](https://repo2docker.readthedocs.io/en/latest/howto/languages.html#the-r-language)? | 0easy
|
Title: `Checkbox` Border Color When `is_on = False` Should Default to Secondary Color
Body: When `is_on = False` for the `Checkbox` component, the border color defaults to black. For better visual consistency, the border color should match the color of the filled checkbox, such as the secondary color by default.
<img width="122" alt="image" src="https://github.com/rio-labs/rio/assets/41641225/cc520d22-94c2-4139-bc69-f0dc48afb031">
<img width="67" alt="image" src="https://github.com/rio-labs/rio/assets/41641225/cf46f297-d37d-45c8-b7d9-ee6373d9754b">
| 0easy
|
Title: A application tutorial about Baysian Vector Autoregression?
Body: NumPyro is quite a good package for probabilistic programming, and I have seen there is a tutorial about AR(2).
Since VAR is quite important in economics, I wish you and your team could post a tutorial about this. :D | 0easy
|
Title: `skimage.measure.find_contours` returns results outside masked areas
Body: ### Description:
The `skimage.measure.find_contours` function sometimes returns contours which are entirely outside the input array masked areas, and thus that should be filtered out. This was reported in the context of DataLab (see the [associated issue](https://github.com/Codra-Ingenierie-Informatique/DataLab/issues/34)).
In DataLab, the "detection contour" feature is based on this function and, following a bug report, a workaround was added to simply filter out all contours which coordinates correspond to masked pixels:
```python
contours = measure.find_contours(data, level=get_absolute_level(data, level))
coords = []
for contour in contours:
    # `contour` is a (N, 2) array (rows, cols): we need to check if all those
    # coordinates are masked: if so, we skip this contour
    if isinstance(data, ma.MaskedArray) and np.all(
        data.mask[contour[:, 0].astype(int), contour[:, 1].astype(int)]
    ):
        continue
```
### Way to reproduce:
The test image may be found [here](https://github.com/Codra-Ingenierie-Informatique/DataLab/blob/main/cdl/data/tests/fabry-perot1.jpg).
The test script which reproduced the issue (before it was fixed by adding the functions mentioned above) is [here](https://github.com/Codra-Ingenierie-Informatique/DataLab/blob/main/cdl/tests/features/images/contour_fabryperot_app.py).
Please also take a look at the [associated issue](https://github.com/Codra-Ingenierie-Informatique/DataLab/issues/34) which describes the original problem.
### Version information:
```Shell
3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
Windows-10-10.0.22631-SP0
scikit-image version: 0.22.0
numpy version: 1.26.1
```
| 0easy
|
Title: Review usage of `covdefaults` and `.coveragerc`
Body: Review the current state of the `.coveragerc` config file. My guess is that it could probably be simplified since we are also using the `covdefaults` package. | 0easy
|
Title: Improve test coverage
Body: Currently, our test coverage is:
coverage: platform linux, python 3.10.12-final-0
Name Stmts Miss Cover
gpt_engineer/__init__.py 1 0 100%
gpt_engineer/cli/__init__.py 0 0 100%
gpt_engineer/cli/collect.py 32 6 81%
gpt_engineer/cli/file_selector.py 146 115 21%
gpt_engineer/cli/learning.py 114 58 49%
gpt_engineer/cli/main.py 62 62 0%
gpt_engineer/core/__init__.py 0 0 100%
gpt_engineer/core/ai.py 127 62 51%
gpt_engineer/core/chat_to_files.py 56 28 50%
gpt_engineer/core/db.py 44 5 89%
gpt_engineer/core/domain.py 4 0 100%
gpt_engineer/core/steps.py 144 102 29%
TOTAL 730 438 40%
Which apparently still lets bugs seep into our merges.
Any PRs with adequate tests are more than welcome. | 0easy
|
Title: Redirects importer does not clear its cache storage
Body: ### Issue Summary
Related: #12797. Reading the code of our temporary storage implementation, we're [missing our `CACHE_PREFIX`](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/redirects/tmp_storages.py#L87-L91) when deleting entries:
```python
def read(self, read_mode="r"):
    return cache.get(self.CACHE_PREFIX + self.name)

def remove(self):
    cache.delete(self.name)
```
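Presumably the fix is just to include the prefix in `remove()` as well (a sketch):
```python
def remove(self):
    cache.delete(self.CACHE_PREFIX + self.name)
```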
The cache entry will clear itself after 24 hours, so this isn't exactly a big issue. `delete` fails silently, hence why this might not have been spotted in the past.
### Steps to Reproduce
I've not actually tried to reproduce this so this list of steps is a guess.
1. Configure the project to use a cache backend that can be inspected (for example DB cache or Redis)
2. Set `WAGTAIL_REDIRECTS_FILE_STORAGE = "cache"` in settings.
3. Go to the redirects management and bulk-import redirects. Here is a test file: [redirects.csv](https://github.com/thibaudcolas/bakerydemo-editor-guide/blob/main/bakerydemo/base/fixtures/redirects.csv).
4. After completing the import, inspect the cache to see if the redirects' temporary storage was cleared
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: no
### Technical details
N/A
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this if they have experience with cache storages, in particular writing tests for this code. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you're ready to start.
Our [tmp_storages.py](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/redirects/tmp_storages.py) comes [from django-import-export](https://github.com/django-import-export/django-import-export/blob/main/import_export/tmp_storages.py) | 0easy
|
Title: Add Instructions for Bundling (pyinstaller, py2exe or similar)
Body: If you're familiar with `pyinstaller` or similar, this issue is for you!
Given Rio's ability to create desktop apps, it would be very nice to have a tried and true method for bundling apps made with Rio. Create an app, turn it into an exe, and provide detailed instructions (and maybe a sample configuration file) for creating executables. | 0easy
|
Title: Constant for SQLITE_INDEX_CONSTRAINT_ISNULL is Incorrect
Body: Right now the constant for `SQLITE_INDEX_CONSTRAINT_ISNULL` is hardcoded `69` https://github.com/betodealmeida/shillelagh/blob/14579e4b8c3159adc4076b36638d13f00dc70609/src/shillelagh/backends/apsw/vt.py#L68-L69
however that value should be `71`. See https://www.sqlite.org/c3ref/c_index_constraint_eq.html
The ordering here is not what determines the constant value: https://github.com/rogerbinns/apsw/blob/f2649fc7b0f7e80a5f18fa60ae796d6fd9aaf214/src/apsw.c#L1531-L1533
This can all be confirmed in Python:
```
>>> apsw.SQLITE_INDEX_CONSTRAINT_ISNULL
71
``` | 0easy
|
Title: Improve bitwise operator support
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/11621
<div type='discussions-op-text'>
<sup>Originally posted by **nmoreaud** July 19, 2024</sup>
Hello
Do you know if bit operators are working fine with Oracle?
I have encountered an error: `ORA-00920: invalid relational operator\nHelp: https://docs.oracle.com/error-help/db/ora-00920/"`
In the logs, I see that the operator has been replaced in the query by `&`, which is not supported by my database (`bitand` is the right syntax).
```python
# SQLAlchemy==2.0.28
# CREATE TABLE ... (
# "UNIX_RIGHTS" NUMBER(3,0) DEFAULT 1,
# )
# unixrights= Column(SmallInteger, default=1)
unixrights= mapped_column('unixrights', SmallInteger, nullable=True, default=1)
filter = or_(User.unixrights.bitwise_and(1) != 0, User.unixrights.bitwise_and(2) != 0)
``` | 0easy
|
Title: Blank space at the bottom of the page
Body: **Server Info (please complete the following information):**
- OS: Ubuntu 20.04
- Browser: Chrome and Edge
- RMM Version (as shown in top left of web UI): 0.13.4
**Installation Method:**
- [X] Standard
- [ ] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI):
- Agent OS: [e.g. Win 10 v2004, Server 2012 R2]
**Describe the bug**
When viewing the UI a persistent "bar" of empty space is at the bottom of the screen.
**To Reproduce**
Steps to reproduce the behavior:
1. Open RMM
**Expected behavior**
Content should fill the full screen
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

| 0easy
|
Title: Running the executable on macOS
Body: Step 1: Open Terminal.
<img width="1820" alt="Screenshot 2024-11-25 16 06 24" src="https://github.com/user-attachments/assets/e64ddc10-be88-4e8b-a117-1bbd5de82def">
Step 2: Enter `sudo spctl --master-disable` and press Enter, then enter the macOS login password (the password is not displayed).
<img width="1021" alt="Screenshot 2024-11-25 16 04 18" src="https://github.com/user-attachments/assets/66bad811-1b18-4f10-be09-c38eed16852a">
<img width="1021" alt="Screenshot 2024-11-25 16 04 29" src="https://github.com/user-attachments/assets/bbb13a5d-b5a7-4c74-b3bd-921b3d328523">
Step 3: Go to Settings -> Privacy & Security and, under "Allow applications downloaded from", select "Anywhere".
<img width="743" alt="Screenshot 2024-11-25 16 05 28" src="https://github.com/user-attachments/assets/e1702b72-89fb-4f0c-bb2b-00873ea03a44">
Step 4: Double-click the `main` executable. I don't know why I get an error here, but after clicking Cancel the program can be opened?
<img width="1820" alt="Screenshot 2024-11-25 16 10 16" src="https://github.com/user-attachments/assets/090859ed-6832-4e1c-bc4b-e2310f23bd15">
<img width="1465" alt="Screenshot 2024-11-25 16 11 11" src="https://github.com/user-attachments/assets/4ebf6fe0-7b98-43e0-b36f-e9901e3f22b1">
| 0easy
|
Title: confusing error if wrong extension
Body: ploomber infers the type of task from a combination of the source and product extension, but if there's a typo, a confusing error shows. For example, if the user wants to dump data:
```yaml
- source: another.sql
product: another.parquets # typo here
```
error:
```pytb
ploomber.exceptions.SourceInitializationError: Error initializing SQLScriptSource('SELECT *\nFROM nu...ams"], "param") ]]') Loaded from: /Users/Edu/Desktop/sql/another.sql: The {{product}} placeholder is required. Example: 'CREATE TABLE {{product}} AS (SELECT * FROM ...)'
```
Fix: check whether the value in `product` has something similar to a `.parquet` or `.csv` extension and show a clearer error
| 0easy
|
Title: add inverse_transform method to variable transformer classes
Body: Add to the log, power and reciprocal transformers (https://github.com/solegalli/feature_engine/tree/master/feature_engine/transformation) an inverse_transform method that performs the inverse operation.
So for log it would be exp, for exp it would be log, for 1/x it would be 1/x
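Conceptually the inverse just undoes the forward transform; a tiny illustration with plain numpy/pandas (not the feature_engine API itself):
```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0]})
logged = np.log(df)        # what the log transformer's transform() does, conceptually
restored = np.exp(logged)  # what its inverse_transform() would do
assert np.allclose(restored, df)
```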
BoxCox and YeoJohnson would continue without the inverse_transform. Not sure these operations can be reversed. | 0easy
|
Title: Improve CLI command suggestions
Body: We currently have a hardcoded list of strings to catch typos when executing a command: https://github.com/ploomber/ploomber/blob/ff808c83bc63ce8c1ae09b726f0bceeb6d9d786e/src/ploomber/cli/cli.py#L243
However, this isn't robust. pip implements it using difflib, which is part of the standard library; pip's implementation is linked below.
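A minimal difflib-based sketch (the command names here are made up for illustration):
```python
import difflib

COMMANDS = ["build", "plot", "status", "task"]  # hypothetical command list

def suggest(cmd):
    # Returns a "Did you mean ...?" hint for the closest known command, if any.
    matches = difflib.get_close_matches(cmd, COMMANDS, n=1)
    return f"Did you mean '{matches[0]}'?" if matches else None

print(suggest("buld"))  # Did you mean 'build'?
```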
https://github.com/pypa/pip/blob/5ab7c124db917625e55693773bcc224793d7ceed/src/pip/_internal/commands/help.py | 0easy
|
Title: Time parameters should accept `datetime.timedelta` instead of `int`
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently timeout arguments are ambiguous as it is not clear what the value passed corresponds to. Is it seconds? Milliseconds?
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should refactor time parameters to accept a `datetime.timedelta` instance. This would also give the control back to the user, allowing them to ergonomically use the units of their desire.
We should also still accept integers but raise a deprecation warning and then remove them in the next release.
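For illustration, the difference in ergonomics (a hypothetical function, not the actual client API):
```python
import datetime

def connect(timeout: datetime.timedelta) -> None:
    # The unit is unambiguous and the caller picks whatever granularity they want.
    print(f"connecting with a {timeout.total_seconds():g}s timeout")

connect(timeout=datetime.timedelta(seconds=10))        # 10s
connect(timeout=datetime.timedelta(milliseconds=500))  # 0.5s
```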
| 0easy
|
Title: Google colab - Feature selection not working
Body: This is my setting: dataframe dataset, numerical values, target is binary classification, and I am trying to do feature selection.
automl = AutoML(
mode = 'Compete',
eval_metric = 'f1',
validation_strategy = {"validation_type": "custom"},
results_path=folder+'automl_featsel2_'+subject_val,
explain_level = 1,
golden_features = False,
algorithms = ['Xgboost'],
features_selection = True,
stack_models = False,
hill_climbing_steps = 0,
top_models_to_improve = 5,
train_ensemble = False,
start_random_models = 1,
kmeans_features = False,
random_state = 42
)
Hello, I get the following warning when I fit:
log_loss_eps() got an unexpected keyword argument 'response_method'
Problem during computing permutation importance. Skipping ...
'module' object is not callable
Skip features_selection because no parameters were generated. | 0easy
|
Title: Display link to the Docs if error with flags
Body: ### Is your feature request related to a problem? Please describe.
when running:
```
interpreter --this-does-not-exist
```
### Describe the solution you'd like
when running:
```
interpreter --this-does-not-exist
```
Display the link to the docs
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Allow using `pytest-recording` with a command-line flag only
Body: It seems like right now you need to define which functions will make use of `vcr` with a decorator. It would be great if there was a way to automatically enable `vcr` for all tests without requiring any change in the source code.
For example, having a flag like `pytest --vcr` would automatically enable VCR for all requests.
This way, tests would run in any environment without `vcr` nor `pytest-recording` installed (i.e.: CI/CD/deployment) and, for those wanting to accelerate local testing, it would be as simple as installing the packages and adding the `--vcr` flag.
Not sure this is even possible. :shrug: :smile: | 0easy
|
Title: Unit tests for aux io and moving to io for release 2.0
Body: I think before the 2.0 release we should unit test what we can for the aux io module and move what we can to the io module. | 0easy
|
Title: Test failure after initial setup for development
Body: ### System Info
OS version: Windows 10 Pro
Python version: 3.10.10
PandasAI version: 1.3.3
### ๐ Describe the bug
After a clean setup for initial development, notice that 2 of the tests are failing:
```bash
================================================================================ short test summary info =================================================================================
FAILED tests/test_smartdataframe.py::TestSmartDataframe::test_save_chart_non_default_dir - AssertionError: assert 'exports/char...b180ab5a5.png' == 'exports/char...b180ab5a5.png'
FAILED tests/callbacks/test_file.py::TestFileCallback::test_on_code - PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\Henrique\\Desktop\\area de trabalho\\pessoal\\repos\\pandas...
================================================================= 2 failed, 248 passed, 2 skipped, 40 warnings in 12.09s =================================================================
```
I'll give extra guidance on both errors.
## `test_on_code`
This is returning `none` for the `self.assertEqual(contents, response)` statements, although the strings compared are equal.
And why are mixed frameworks for testing? There are some tests using the built-in `unittest` module. Shouldn't it be standardized all to `pytest`?
## `test_save_chart_non_default_dir`
This may have a connection with the OS. The test is failing on the following assertion:
```python
assert (
plt_mock.savefig.call_args.args[0]
== f"exports/charts/{smart_dataframe.last_prompt_id}.png"
)
```
This is not the best practice because the path is fixed in Unix style. The problem is precisely in the fixed path variable.
```python
plt_mock.savefig.call_args.args[0] = 'exports/charts\\cf69927e-2ec6-4ae6-a063-0d3d30598328.png'
f"exports/charts/{smart_dataframe.last_prompt_id}.png" = 'exports/charts/cf69927e-2ec6-4ae6-a063-0d3d30598328.png'
```
@gventuri, would you like to let someone work on this? Or can I handle it? I prefer to let new contributors have a chance to work on it, especially the second one, which is a path and OS related problem. | 0easy
|
Title: Add the missing docstrings to the `testing_node.py` file
Body: Add the missing docstrings to the [testing_node.py](https://github.com/scanapi/scanapi/blob/main/scanapi/tree/testing_node.py) file
[Here](https://github.com/scanapi/scanapi/wiki/First-Pull-Request#7-make-your-changes) you can find instructions of how we create the [docstrings](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring).
Child of https://github.com/scanapi/scanapi/issues/411 | 0easy
|
Title: `TypedDict` with forward references do not work in argument conversion
Body: If we use a `TypedDict` like below as a type hint, type conversion is done for key `x` but not for `y`.
```python
from typing import TypedDict
class Example(TypedDict):
x: int
y: 'int'
```
This is different compared to functions and methods where argument conversion is done both for `x` and `y` if we have `def example(x: int, y: 'int')`. The reason is that with `TypedDict` the annotation assigned to `y` by Python is `ForwardRef`, while with functions and methods we get the original string. Our conversion logic handles strings as aliases but doesn't know what to do with `ForwardRef`s.
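To illustrate the difference (output shown is what recent Python versions produce):
```python
from typing import TypedDict, get_type_hints

class Example(TypedDict):
    x: int
    y: 'int'

print(Example.__annotations__)  # {'x': <class 'int'>, 'y': ForwardRef('int')}
print(get_type_hints(Example))  # {'x': <class 'int'>, 'y': <class 'int'>}
```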
A simple fix for this issue is using `typing.get_type_hints` for getting types instead of accessing `__annotations__` like we currently do. That has a problem that using something like `x: 'int | float'` will cause an error with Python < 3.10. We just added support for such stringified types in arguments (#4711) and supporting them with `TypedDict` would be good as well. I guess we need to use `get_type_hints` first and then access `__annotations__` as a fall-back. Then we may get `ForwardRef`s and need to handle them as well. | 0easy
|
Title: Add a command to clean pipeline products
Body: If users want to free up disk space, they have to manually delete artifacts; we could add something like:
```python
dag.clean()
dag['task'].clean()
```
to delete existing products + metadata
Note: should this delete remote copies if a client is configured? | 0easy
|
Title: [Feature request] Add apply_to_images to BaseDistortion
Body: | 0easy
|
Title: [Syntax] Examples Clarification
Body: ## Description of the problem, including code/CLI snippet
For some provided examples, initialization of a GitLab object instance appears to use lowercase characters. No where in the package is a difference in capitalization noted.
Example: [https://python-gitlab.readthedocs.io/en/stable/api-usage-advanced.html#rate-limits](Advanced Usage: Rate Limits) -
```python
import gitlab
import requests
# This object should be capitalized?
# |
# |
# |
# V
gl = gitlab.gitlab(url, token, api_version=4)
gl.projects.list(get_all=True, obey_rate_limit=False)
```
Given that Python is a case-sensitive language, `gitlab.Gitlab` != `gitlab.gitlab` unless it's explicitly aliased or noted that either will work.
## Expected Behavior
`N/A`
## Actual Behavior
`N/A`
## Specifications
- python-gitlab version: `All`
- API version you are using (v3/v4): `v4`
- Gitlab server version (or gitlab.com): `gitlab.com`
| 0easy
|
Title: [BUG] Validation fails in beanie 1.24.0 when dictionary key is an Enum
Body: **Describe the bug**
https://github.com/roman-right/beanie/pull/785 introduced a bug here: https://github.com/roman-right/beanie/blob/main/beanie/odm/utils/encoder.py#L134. Now validation fails if dictionary key is an `Enum`.
**To Reproduce**
```python
from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient
from typing import Dict
from enum import Enum
import asyncio
class CategoryEnum(str, Enum):
CHOCOLATE = "Chocolate"
CANDY = "Candy"
class Product(Document):
name: str
price: float
category: Dict[CategoryEnum, str]
async def example():
client = AsyncIOMotorClient(<mongodb_connection>)
await init_beanie(database=client["test_db"], document_models=[Product])
chocolate = {CategoryEnum.CHOCOLATE: "description of chocolate"}
tonybar = Product(name="Tony's", price=1.1, category=chocolate)
await tonybar.insert()
product = await Product.find_one()
print(product)
if __name__ == "__main__":
asyncio.run(example())
```
**Expected behavior**
With beanie 1.23.6 the script prints this on the command line:
````
id=ObjectId('659fc96a59f26f162d6cf1dc') revision_id=None name="Tony's" price=1.1 category={<CategoryEnum.CHOCOLATE: 'Chocolate'>: 'description of chocolate'}
````
With beanie 1.24.0:
```
Traceback (most recent call last):
File "C:some_path\beanie_bug.py", line 36, in <module>
asyncio.run(example())
File "C:\Users\JuhoSalminen\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Users\JuhoSalminen\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "C:some_path\beanie_bug.py", line 31, in example
product = await Product.find_one()
File "C:some_path\.venv\lib\site-packages\beanie\odm\queries\find.py", line 1023, in __await__
FindQueryResultType, parse_obj(self.projection_model, document)
File "C:some_path\.venv\lib\site-packages\beanie\odm\utils\parsing.py", line 104, in parse_obj
result = parse_model(model, data)
File "C:some_path\.venv\lib\site-packages\beanie\odm\utils\pydantic.py", line 39, in parse_model
return model_type.parse_obj(data)
File "pydantic\main.py", line 526, in pydantic.main.BaseModel.parse_obj
File "C:some_path\.venv\lib\site-packages\beanie\odm\documents.py", line 182, in __init__
super(Document, self).__init__(*args, **kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Product
category -> __key__
value is not a valid enumeration member; permitted: 'Chocolate', 'Candy' (type=type_error.enum; enum_values=[<CategoryEnum.CHOCOLATE: 'Chocolate'>, <CategoryEnum.CANDY: 'Candy'>])
```
Replacing line 134 in `beanie/odm/utils/encoder.py` with `return {key: self.encode(value) for key, value in obj.items()}` fixes the issue.
| 0easy
|
Title: Do not override already set env variables from `.env`
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently the `.env` variables take precedence over the system environment variables. This can cause issues because the Prisma CLI will use the system environment variables instead, which could lead to migrations being applied to a different database if you have two different connection strings set.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
System environment variables should take priority.
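A rough sketch of the intended precedence (a hypothetical helper, not the actual Prisma Client Python or CLI code); python-dotenv's `load_dotenv` already behaves this way by default via `override=False`:
```python
import os

def load_env_file(path: str = ".env") -> None:
    """Apply .env values only for variables that are not already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # System environment variables take priority over .env values.
            os.environ.setdefault(key.strip(), value.strip())
```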
## Additional context
Mentioned in #420. | 0easy
|
Title: [examples] Update example projects
Body: There two official example projects:
- [FastAPI Demo](https://github.com/roman-right/beanie-fastapi-demo) - Beanie and FastAPI collaboration demonstration. CRUD and Aggregation.
- [Indexes Demo](https://github.com/roman-right/beanie-index-demo) - Regular and Geo Indexes usage example wrapped to a microservice.
Both should:
- Show, how to use current Beanie syntax
- Contain unit tests | 0easy
|
Title: [🐛 BUG] Favicon not being displayed using run(favicon="favicon.png")
Body: ### What went wrong? 🤔
I cannot display a favicon even if I specify it in the run arguments
The favicon is 96x96 pixels in png
### Expected Behavior
My favicon to be displayed
### Steps to Reproduce Issue
```python
from taipy.gui import builder as tgb
from taipy.gui import Gui
from pathlib import Path
favicon_path = Path("favicon.png")
with tgb.Page() as page:
if favicon_path.exists():
tgb.text("The favicon exists ")
tgb.text("Where is my favicon?")
Gui(page,).run(debug=True, use_reloader=True, port=5111, favicon=str(favicon_path))
```
### Solution Proposed
_No response_
### Screenshots

### Runtime Environment
Ubuntu
### Browsers
Firefox
### OS
Linux
### Version of Taipy
3.1.1
### Additional Context
```bash
taipy==3.1.1
taipy-config==3.1.1
taipy-core==3.1.1
taipy-gui==3.1.4
taipy-rest==3.1.1
taipy-templates==3.1.1
```
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: Feature Request: Pivot Points
Body: Pivot points are used to identify important price levels during a trading session. I have used them successfully for day trading as they are solid inflection points.
Read more here - https://tradingsim.com/blog/pivot-points/
>
> Pivot Point (PP) = (Prior Daily High + Low + Close) / 3
> R1 = (2 x Pivot Point) - Prior Daily Low
> R2 = Pivot Point + (Prior Daily High - Prior Daily Low)
> S1 = (2 x Pivot Point) - Prior Daily High
> S2 = Pivot Point - (Prior Daily High - Prior Daily Low)
> R3 = Daily High + 2 x (Pivot Point - Prior Daily Low)
> S3 = Daily Low - 2 x (Prior Daily High - Pivot Point)
>
Since it is typically computed on daily timeframe (and above), yfinance data can be used as a default.
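For reference, a minimal plain-pandas sketch of these formulas (assuming prior-session `High`/`Low`/`Close` series, e.g. from daily yfinance data; untested):
```python
import pandas as pd

def pivot_points(high: pd.Series, low: pd.Series, close: pd.Series) -> pd.DataFrame:
    # Use the *prior* session's values, per the formulas above.
    ph, pl, pc = high.shift(1), low.shift(1), close.shift(1)
    pp = (ph + pl + pc) / 3
    return pd.DataFrame({
        "PP": pp,
        "R1": 2 * pp - pl,
        "S1": 2 * pp - ph,
        "R2": pp + (ph - pl),
        "S2": pp - (ph - pl),
        "R3": ph + 2 * (pp - pl),
        "S3": pl - 2 * (ph - pp),
    })
```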
My Python skills are limited so I cannot code it as a Pandas TA custom indicator. That said, it seems easy enough that I can compute it in pandas. It would just be super convenient if it was part of Pandas TA. I will update this issue with the pandas code when I get it done. | 0easy
|
Title: Remove deprecated code from `geometry.conversions`
Body: The [kornia.geometry.conversion](https://github.com/kornia/kornia/blob/master/kornia/geometry/conversions.py#L411) has deprecated code that can be cleaned up
Aside from removing the `order` deprecated code, we also need to update the test to not ignore warnings on this code, for example https://github.com/kornia/kornia/blob/master/test/geometry/test_conversions.py#L50 | 0easy
|
Title: [UX] Terminating a jobs controller that is on a non-existing Kubernetes cluster fails with `-p`
Body: To reproduce:
1. `sky local up`
2. `sky jobs launch --cloud kubernetes --cpus 1 echo hi`
3. `sky down sky-jobs-controller-xxx -p` fails to remove the entry from the table | 0easy
|
Title: Problem with pandas_ta
Body: Hi all, I'm a newbie with Python and I have a problem with the following code.
Line 2 doesn't work.
```python
import pandas as pd
import pandas_ta as ta

df = pd.DataFrame()
help(df.ta)
```
I previously installed pandas_ta from a cmd window with the following command: `pip install pandas_ta`
This is the output error:
```
ImportError                               Traceback (most recent call last)
Cell In[8], line 7
      3 import pandas as pd
      5 # Importing the pandas_ta library
      6 # and giving it an alias 'ta'
----> 7 import pandas_ta as ta
      9 # Creating an empty dataframe
     10 df = pd.DataFrame()

File c:\Users\Alberto\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas_ta\__init__.py:116
     97 EXCHANGE_TZ = {
     98     "NZSX": 12, "ASX": 11,
     99     "TSE": 9, "HKE": 8, "SSE": 8, "SGX": 8,
    (...)
    102     "BMF": -2, "NYSE": -4, "TSX": -4
    103 }
    105 RATE = {
    106     "DAYS_PER_MONTH": 21,
    107     "MINUTES_PER_HOUR": 60,
    (...)
    113     "YEARLY": 1,
    114 }
--> 116 from pandas_ta.core import *
...
File c:\Users\Alberto\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas_ta\momentum\squeeze_pro.py:2
----> 2 from numpy import NaN as npNaN
      3 from pandas import DataFrame
      4 from pandas_ta.momentum import mom

ImportError: cannot import name 'NaN' from 'numpy' (c:\Users\Alberto\AppData\Local\Programs\Python\Python313\Lib\site-packages\numpy\__init__.py)
```
Could you help me? Thank you!
| 0easy
|
Title: Ensure ClassificationScoreVisualizers subclasses return score and add tests
Body: Extended from #361 to be more specific. In #407 we reworked the `ClassificationScoreVisualizer` to save the score in a `score_` property, then return it from the method after drawing. We still need to update the subclasses to either call `super()` or ensure that they implement the behavior described above (see the rough test sketch after this checklist).
- [ ] `ClassPredictionError` should return `self.estimator.score()` but already stores `score_`; requires a test to ensure it returns a score between 0 and 1.
- [ ] `ClassificationReport` should return `self.estimator.score()` but already stores `score_`; requires a test to ensure it returns a score between 0 and 1.
- [ ] `ConfusionMatrix` should return `self.estimator.score()` and store `score_` (it keeps its references in a `confusion_matrix_` property). Requires a test to ensure it returns a score between 0 and 1.
- [ ] `ROCAUC` already returns a score property. Check to make sure it has a test for the score. | 0easy
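A rough pytest sketch of the kind of check each of these items needs (dataset, estimator, and split choices are arbitrary):
```python
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import (
    ClassificationReport, ClassPredictionError, ConfusionMatrix, ROCAUC,
)

@pytest.mark.parametrize(
    "Viz", [ClassificationReport, ClassPredictionError, ConfusionMatrix, ROCAUC]
)
def test_score_is_between_zero_and_one(Viz):
    X, y = make_classification(n_samples=200, n_informative=4, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    oz = Viz(LogisticRegression())
    oz.fit(X_train, y_train)
    score = oz.score(X_test, y_test)

    # The returned value should be a score in [0, 1] and match the stored score_.
    assert 0.0 <= score <= 1.0
    assert oz.score_ == score
```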
|
Title: Elephant factor metric API
Body: The canonical definition is here: https://chaoss.community/?p=3940 | 0easy
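For orientation, the metric is commonly read as the smallest number of organizations whose contributors account for more than half of all contributions. A rough pandas sketch under that reading (the `org` column name and input shape are assumptions, not Augur's schema):
```python
import pandas as pd

def elephant_factor(contributions: pd.DataFrame) -> int:
    """contributions: one row per contribution, with an 'org' column."""
    counts = contributions["org"].value_counts()  # largest orgs first
    half = counts.sum() / 2
    cumulative, factor = 0, 0
    for n in counts:
        cumulative += n
        factor += 1
        if cumulative > half:
            break
    return factor
```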
|
Title: Add benchmark test for open_datatree
Body: ### What is your issue?
To prevent regressions in the future, we should add a benchmark test for opening a deeply nested data tree.
This is to follow up on @aladinor's performance improvements in #9014.
You can see here how we benchmark opening and loading a single netCDF file
https://github.com/pydata/xarray/blob/447e5a3d16764a880387d33d0b5938393e167817/asv_bench/benchmarks/dataset_io.py#L125
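A rough sketch of what such a benchmark could look like (class and method names are illustrative, not the actual `asv_bench` code, and it assumes the integrated `xr.DataTree` API plus the netCDF4 engine):
```python
import numpy as np
import xarray as xr

class IOReadDeepDataTree:
    """ASV-style benchmark: open a deeply nested DataTree without loading it."""

    def setup(self):
        self.filepath = "deep_tree.nc"
        ds = xr.Dataset({"a": ("x", np.arange(10))})
        # Build ~100 nested groups: /group, /group/group, ...
        nodes = {"/": ds}
        path = ""
        for _ in range(100):
            path += "/group"
            nodes[path] = ds
        xr.DataTree.from_dict(nodes).to_netcdf(self.filepath, engine="netcdf4")

    def time_open_datatree(self):
        xr.open_datatree(self.filepath, engine="netcdf4")
```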
| 0easy
|
Title: List of smaller extraction bugs (text & metadata)
Body: I have mostly tested `trafilatura` on a set of English, German and French web pages I had run into by surfing or during web crawls. There are definitely further web pages and cases in other languages for which the extraction doesn't work so far.
Corresponding bug reports can either be filed as a list in an issue like this one or in the code as XPath expressions in [xpaths.py](https://github.com/adbar/trafilatura/blob/master/trafilatura/xpaths.py) (see `BODY_XPATH` and `COMMENTS_XPATH` lists).
Thanks! | 0easy
|
Title: Comparison between AnyUrl objects is not working
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
It's no longer possible to sort AnyUrl objects. This worked in 2.9, like so for example:
```
+ annotated-types==0.7.0
+ pydantic==2.9.0
+ pydantic-core==2.23.2
+ typing-extensions==4.12.2
+ tzdata==2024.2
```
### Example Code
```Python
from pydantic import AnyUrl
first_url = AnyUrl("https://a.com")
second_url = AnyUrl("https://b.com")
try:
sorted([second_url, first_url])
except TypeError as e:
print(f"sorting failed: {e}")
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: /Users/arthur/<project>/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 7 2024, 23:45:10) [Clang 18.1.8 ]
platform: macOS-14.7.1-arm64-arm-64bit
related packages: pydantic-settings-2.6.1 typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: Add TorchScript backend
Body: | 0easy
|
Title: can you support smc/bos/choch smart money indicator
Body: can you support smc/bos/choch smart money indicator
https://www.tradingview.com/script/CnB3fSph-Smart-Money-Concepts-LuxAlgo/ | 0easy
|
Title: Surprising empty output in skimage.feature.hog()
Body: ### Description:
If the image is too small given the chosen `pixels_per_cell` and `cells_per_block`, an empty array is produced as output. [This is a surprising output.](https://stackoverflow.com/q/76681370/7328782)
I suggest you produce a descriptive error message instead.
Alternatively, make this behavior clear in the documentation. The documentation could, by the way, show how to compute the expected output size of the array, as this is not obvious from the outside.
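For example, a rough helper for the expected descriptor length under skimage's default parameters (this reflects my reading of the internal block layout, not an official API); for the `(16, 193)` image below it returns 0, which matches the empty output:
```python
import numpy as np

def hog_feature_size(image_shape, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(3, 3)):
    """Expected length of the flattened HOG descriptor; 0 means 'image too small'."""
    n_cells = np.array(image_shape[:2]) // np.array(pixels_per_cell)
    n_blocks = n_cells - np.array(cells_per_block) + 1
    if (n_blocks <= 0).any():
        return 0
    return int(np.prod(n_blocks) * np.prod(cells_per_block) * orientations)

print(hog_feature_size((16, 193)))  # 0 -> hog() returns an empty array
```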
### Way to reproduce:
```python
import numpy as np
import skimage
img = np.zeros((16, 193))
print(skimage.feature.hog(img))
```
### Version information:
I'm using scikit-image version 0.21.0.
[The remainder of the information is irrelevant, I can see [in the source code where this is happening](https://github.com/scikit-image/scikit-image/blob/76dd2dab856f95e5264f50e376f9f3a81a261287/skimage/feature/_hog.py#L277).]
| 0easy
|
Title: Command to run already built image
Body: Often, I would like to play with what volumes are mounted, or what command is executed, without having to rebuild the docker image.
This takes a long time when working on a JupyterLab extension that depends on JupyterLab master and so has to rebuild all of JupyterLab every image build.
Currently, I am looking for the image name manually with `docker image ls` and running it manually. This isn't ideal, because then it isn't run in the same way as `repo2docker` runs it. You pass some extra options on start, like a port and the custom display URL. I would like to keep all these niceties and not have to hunt in the image list for the most recent one.
I thought this was what the `--no-build` argument would do, but I guess it not only skips the build but also doesn't start the container. For example, [`docker-compose up`](https://docs.docker.com/compose/reference/up/) has a `--no-build` argument that won't build the images, but will start up the services. | 0easy
|
Title: Unnecessary dependency on FuzzyTM pulls in many libraries
Body: #### Problem description
I'm trying to upgrade to the new Gensim 4.3.0 release. My colleague @juhoinkinen noticed in https://github.com/NatLibFi/Annif/pull/660 that Gensim 4.3.0 pulls in more dependencies than the previous release 4.2.0, including pandas. I suspect that at least the FuzzyTM dependency (which in turn pulls in pandas) is actually unused and thus unnecessary.
#### Steps/code/corpus to reproduce
Installing Gensim 4.2.0 into an empty venv (only four packages installed):
```
$ pip install gensim==4.2.0
Collecting gensim==4.2.0
Downloading gensim-4.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.0/24.0 MB 2.0 MB/s eta 0:00:00
Collecting scipy>=0.18.1
Downloading scipy-1.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 34.4/34.4 MB 3.3 MB/s eta 0:00:00
Collecting numpy>=1.17.0
Downloading numpy-1.24.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 10.6 MB/s eta 0:00:00
Collecting smart-open>=1.8.1
Downloading smart_open-6.3.0-py3-none-any.whl (56 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.8/56.8 KB 9.7 MB/s eta 0:00:00
Installing collected packages: smart-open, numpy, scipy, gensim
Successfully installed gensim-4.2.0 numpy-1.24.1 scipy-1.10.0 smart-open-6.3.0
```
Installing Gensim 4.3.0 into an empty venv (18 packages installed):
```
$ pip install gensim==4.3.0
Collecting gensim==4.3.0
Downloading gensim-4.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.1/24.1 MB 6.9 MB/s eta 0:00:00
[...skipping downloads...]
Installing collected packages: pytz, urllib3, smart-open, six, numpy, idna, charset-normalizer, certifi, scipy, requests, python-dateutil, simpful, pandas, miniful, fst-pso, pyfume, FuzzyTM, gensim
Running setup.py install for miniful ... done
Running setup.py install for fst-pso ... done
Successfully installed FuzzyTM-2.0.5 certifi-2022.12.7 charset-normalizer-2.1.1 fst-pso-1.8.1 gensim-4.3.0 idna-3.4 miniful-0.0.6 numpy-1.24.1 pandas-1.5.2 pyfume-0.2.25 python-dateutil-2.8.2 pytz-2022.7 requests-2.28.1 scipy-1.10.0 simpful-2.9.0 six-1.16.0 smart-open-6.3.0 urllib3-1.26.13
```
The size of the venv has grown from 249MB to 318MB, an increase of 69MB.
Here is what `pipdeptree` shows - FuzzyTM appears to be the main reason why so many libraries are pulled in:
```
gensim==4.3.0
- FuzzyTM [required: >=0.4.0, installed: 2.0.5]
- numpy [required: Any, installed: 1.24.1]
- pandas [required: Any, installed: 1.5.2]
- numpy [required: >=1.21.0, installed: 1.24.1]
- python-dateutil [required: >=2.8.1, installed: 2.8.2]
- six [required: >=1.5, installed: 1.16.0]
- pytz [required: >=2020.1, installed: 2022.7]
- pyfume [required: Any, installed: 0.2.25]
- fst-pso [required: Any, installed: 1.8.1]
- miniful [required: Any, installed: 0.0.6]
- numpy [required: >=1.12.0, installed: 1.24.1]
- scipy [required: >=1.0.0, installed: 1.10.0]
- numpy [required: >=1.19.5,<1.27.0, installed: 1.24.1]
- numpy [required: Any, installed: 1.24.1]
- numpy [required: Any, installed: 1.24.1]
- scipy [required: Any, installed: 1.10.0]
- numpy [required: >=1.19.5,<1.27.0, installed: 1.24.1]
- simpful [required: Any, installed: 2.9.0]
- numpy [required: >=1.12.0, installed: 1.24.1]
- requests [required: Any, installed: 2.28.1]
- certifi [required: >=2017.4.17, installed: 2022.12.7]
- charset-normalizer [required: >=2,<3, installed: 2.1.1]
- idna [required: >=2.5,<4, installed: 3.4]
- urllib3 [required: >=1.21.1,<1.27, installed: 1.26.13]
- scipy [required: >=1.0.0, installed: 1.10.0]
- numpy [required: >=1.19.5,<1.27.0, installed: 1.24.1]
- scipy [required: Any, installed: 1.10.0]
- numpy [required: >=1.19.5,<1.27.0, installed: 1.24.1]
- numpy [required: >=1.18.5, installed: 1.24.1]
- scipy [required: >=1.7.0, installed: 1.10.0]
- numpy [required: >=1.19.5,<1.27.0, installed: 1.24.1]
- smart-open [required: >=1.8.1, installed: 6.3.0]
pip==22.0.2
pipdeptree==2.3.3
setuptools==59.6.0
```
It appears that the FuzzyTM dependency was added in PR #3398 (Flsamodel) by @ERijck . The first commits in this PR depended on the library, but a subsequent commit 9fec00b32d281e795f3b4701bf11fa1c97780227 reworked the code so it doesn't need to import FuzzyTM at all. But the dependency in setup.py wasn't actually removed, it's still there: https://github.com/RaRe-Technologies/gensim/blob/f35faae7a7b0c3c8586fb61208560522e37e0e7e/setup.py#L347
I think the FuzzyTM dependency could be safely dropped, as the library is not actually imported. It would reduce the number of libraries Gensim pulls in and thus reduce the size of installations, including Docker images where minimal size is often required.
#### Versions
I'm using Ubuntu Linux 22.04.
Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Bits 64
NumPy 1.24.1
SciPy 1.10.0
gensim 4.3.0
FAST_VERSION 0 | 0easy
|
Title: [BUG] Environment variables from .env file ignored in Dash.run
Body: **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Environment variables `HOST`, `PORT` and `DASH_PROXY` are ignored if defined in a .env file and loaded using `python-dotenv`.
In `Dash.run`, environment variables used as default argument values are evaluated at the time the method is defined, not when it is called. This can lead to discrepancies if the environment variables change between definition and execution.
Specifically, this means that environment variables specified in a .env file and loaded using the python-dotenv package will be ignored if they are loaded after the method definition. Since load_dotenv() is typically called after importing Dash, this issue affects many standard usage patterns.
The [docs](https://dash.plotly.com/reference#app.run) for `app.run` say "If a parameter can be set by an environment variable, that is listed too. Values provided here take precedence over environment variables." It doesn't mention that an .env file cannot be used.
**Expected behavior**
Arguments which can be set using environment variables should reflect variables from the .env file (loaded using `python-dotenv`).
**Proposed solution**
In `Dash.run` (`dash.py`), set `host`, `port` and `proxy` to `None` by default.
```python
def run(
self,
host=None,
port=None,
proxy=None,
...
```
Then use
```python
host = host or os.getenv("HOST", "127.0.0.1")
port = port or os.getenv("PORT", "8050")
proxy = proxy or os.getenv("DASH_PROXY", None)
```
at the start of the function itself to read the env vars when the method is called. | 0easy
|
Title: [FEA] GFQL benchmark notebook and GPU examples
Body: **Is your feature request related to a problem? Please describe.**
I added discussion of GFQL GPU support to the main readme and doc strings, but we should have some accessible ipynb's
**Describe the solution you'd like**
- [x] **simple** ipynb showing a before/after speedup of gfql (e.g., 4 very short & simple cells)
- mention using hop() for more speedups for simpler task
- clear 10X+ win on the benchmark
- show it works just by passing in cudf.DataFrame, and optionally, setting `engine='cudf'`
- [x] also cleaned up version of our bigger benchmark one: https://colab.research.google.com/drive/1iuH9YWd3VLSALR-3z1Jt35MXELI3Sjk_#scrollTo=bK4C9Ly0hso-
- [x] linked in the gfql section(s) in the readme.md as appropriate
**Additional context**
Let's land cucat first.. | 0easy
|
Title: Refactor pages
Body: We currently have three pages apart from the landing page
- The inverse kinematics page
- The kinematics page
- The leg patterns page
All of them have the following:
```python
GRAPH_NAME = "graph-hexapod-patterns"
ID_MESSAGE_DISPLAY_SECTION = "display-message-patterns"
SECTION_MESSAGE_DISPLAY = html.Div(id=ID_MESSAGE_DISPLAY_SECTION)
ID_POSES_SECTION = "hexapod-poses-values-patterns"
SECTION_HIDDEN_JOINT_POSES = html.Div(id=ID_POSES_SECTION, style={"display": "none"})
SECTION_CONTROLS = [
SECTION_DIMENSION_CONTROL,
*insert_widget_section_here*,
SECTION_MESSAGE_DISPLAY,
SECTION_HIDDEN_BODY_DIMENSIONS,
*insert_hidden_parameters_here*,
]
```
They also have something like this
```python
OUTPUT_MESSAGE_DISPLAY = Output(ID_MESSAGE_DISPLAY_SECTION, "children")
INPUT_POSES_JSON = Input(*insert parameter section_id_here*, "children")
OUTPUTS = [Output(GRAPH_NAME, "figure"), OUTPUT_MESSAGE_DISPLAY]
INPUTS = [INPUT_DIMENSIONS_JSON, INPUT_POSES_JSON]
STATES = [State(GRAPH_NAME, "relayoutData"), State(GRAPH_NAME, "figure")]
```
We can refactor this into the shared module | 0easy
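One possible shape for such a shared helper (function name and imports here are illustrative only, not the project's actual module layout):
```python
import dash_html_components as html
from dash.dependencies import Output, State

def make_page_sections(page_name):
    """Build the per-page IDs, hidden sections, and callback wiring in one place."""
    graph_name = "graph-hexapod-{}".format(page_name)
    message_id = "display-message-{}".format(page_name)
    poses_id = "hexapod-poses-values-{}".format(page_name)
    return {
        "graph_name": graph_name,
        "message_section": html.Div(id=message_id),
        "hidden_poses_section": html.Div(id=poses_id, style={"display": "none"}),
        "outputs": [Output(graph_name, "figure"), Output(message_id, "children")],
        "states": [State(graph_name, "relayoutData"), State(graph_name, "figure")],
    }
```
Each page would then only add its own widget section and hidden parameters on top of what the helper returns.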
|
Title: Nicer UI when running migrations
Body: When running migrations, the output is a bit ugly - there must be a nicer approach, which is more informative.
<img width="754" alt="Screenshot 2021-09-16 at 16 11 02" src="https://user-images.githubusercontent.com/350976/133637875-a5e3e32c-5303-4890-9033-df55f3bd7c70.png">
| 0easy
|
Title: [3.0] Export Action should support multiple resources too
Body: When using import-export 3.0, the export action in the changelist does not allow resource selection.
I think we should add this support to complete the multi-resource feature.
https://github.com/django-import-export/django-import-export/blob/033f803c5994ceba9da8b610819ee5b52a630bf7/import_export/admin.py#L489
@matthewhegarty @PetrDlouhy what do you think?
| 0easy
|
Title: FIxing algolia search button on mobile
Body: On mobile screens, users need to click on the magnifier button on the right, and then again on the Algolia magnifier button that appears. This is a bit confusing; furthermore, the button isn't aligned.
I'm not sure what the best solution is here, so open to suggestions.

| 0easy
|
Title: [BUG] Unable to scroll in jupyterlab
Body: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Please describe the steps needed to reproduce the behavior. For example:
1. Open up binder [exercise link](https://mybinder.org/v2/gh/lux-org/lux-binder/master?urlpath=lab/tree/exercise) from Github page
2. Open 0-Lux-Overview.ipynb
3. Execute first 3 code cells
**Expected behavior**
Text at the bottom of the viz says 'Scroll for 12 more charts.' There is no scroll bar. When I click on the arrows, nothing happens. When I hover over the arrows, they change shades, but clicking them does nothing.
**Screenshots**

**Additional context**
Add any other context about the problem here.
| 0easy
|
Title: [Feature request] Add apply_to_images to MultiplicativeNoise
Body: | 0easy
|
Title: VertexAIRegion List Incorrect
Body: ### Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
### Description
In the documentation, I think the [VertexAIRegion](https://ai.pydantic.dev/api/models/vertexai/#pydantic_ai.models.vertexai.VertexAiRegion) list should point to the following url instead: [Generative AI on Vertex AI locations](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations#genai-locations), and remove regions not listed on that site.
When I was working on implementing the VertexAIModel in my project I wanted to use `us-west2` as it is the region closest to me with the lowest latency, however in doing so I get the following error message:
```
pydantic_ai.exceptions.UnexpectedModelBehavior: Unexpected response from gemini 404, body:
{
"error": {
"code": 404,
"message": "Publisher Model `projects/gen-lang-client-0404328887/locations/us-west2/publishers/google/models/gemini-2.0-flash` not found.",
"status": "NOT_FOUND"
}
}
```
What I've found is you have to use a region listed here: [Generative AI on Vertex AI locations](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations#genai-locations). In my case, using `us-west4` I do not get any errors.
Maybe I'm doing something wrong?
### Example Code
```Python
```
### Python, Pydantic AI & LLM client version
```Text
Python 3.10.12
pydantic-ai-0.0.30
Gemini via VertexAI
``` | 0easy
|
Title: Make the custom `Base64` type compatible with OpenAPI generation
Body: <!--
Thanks for helping us improve Prisma Client Python! ๐ Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
It is currently not possible to define a FastAPI endpoint that includes our custom `Base64` type, https://github.com/RobertCraigie/prisma-client-py/issues/318#issuecomment-1214218519.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
See https://github.com/RobertCraigie/prisma-client-py/issues/318#issuecomment-1214393759
## Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
FastAPI should generate an OpenAPI spec converting `Base64` inputs to a `string`.
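A rough sketch of the kind of hook that could make this work, assuming Pydantic v1's `__modify_schema__`; the `Base64` class below is only an illustrative stand-in for the generated type:
```python
class Base64(bytes):
    """Illustrative stand-in for the generated Base64 type."""

    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, value):
        if isinstance(value, str):
            value = value.encode("ascii")
        return cls(value)

    @classmethod
    def __modify_schema__(cls, field_schema: dict) -> None:
        # Advertise the field to OpenAPI/JSON Schema as a base64-encoded string.
        field_schema.update(type="string", format="byte")
```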
| 0easy
|
Title: Endpoint to get Pull Requests for a given repo within a given time frame
Body: It would be good to have an endpoint in Augur that gets pull requests for a given repo within a given time frame. It would make Augur metrics more uniform and usable for open source health purposes. Similar endpoints to this already exist:
- [Time series of number of new issues opened during a certain period](https://oss-augur.readthedocs.io/en/main/rest-api/api.html#operation/Issues%20New%20(Repo))
- [Time series of number of new contributors during a certain period](https://oss-augur.readthedocs.io/en/main/rest-api/api.html#operation/New%20Contributors%20(Repo))
Another api endpoint similar to the ones listed can be created except for Pull Requests instead of Issues.
| 0easy
|
Title: Support argument conversion and named arguments with dynamic variable files
Body: If we have a dynamic variable file like
```python
def get_variables(a: int = 0, b: float = 0.0):
...
```
using argument conversion like
```
Variables example.py 42 3.14
```
and also using named arguments like
```
Variables example.py b=${3.14}
```
doesn't work.
This is different from library imports, keywords, and also extensions like listeners. We already have the functionality needed to support argument conversion and named arguments, so hooking it up with variable files is easy. Let's do that!
| 0easy
|
Title: [Refactoring] Use PeerID instead of DHTID
Body: Currently, hivemind.dht uses a specialized identity data structure called [DHTID](https://github.com/learning-at-home/hivemind/blob/master/hivemind/dht/routing.py#L250).
This data strucuture is very similar to another one called [PeerID](https://github.com/learning-at-home/hivemind/blob/2f07a556e6e014073a1c9836e7bd53039bc4c433/hivemind/p2p/p2p_daemon_bindings/datastructures.py#L39), which is used for libp2p interaction.
It appears that we can significantly simplify hivemind.dht by reusing PeerID from libp2p bindings. For that reason, we should
* [ ] figure out if it is possible to compute PeerID as a hash function of arbitrary sequence of bytes
* for DHT to work properly, the PeerID obtained from byte sequences must have the same "probability distribution" as those obtained by spawning P2Pd instances
* [ ] Figure out if we can remove DHTID class in favor of PeerID in all DHT functions
* [ ] Figure out if we can simplify hivemind/dht/routing_table.py to remove the dictionary-like structure
* Simplify hivemind/proto/dht.proto
* [ ] Figure out if we can remove NodeInfo from hivemind/proto/dht.proto(and .peer from all protocols)
* [ ] Figure out if we can remove neared_node_ids from FindResult
* [ ] other ideas on how to simplify DHT? | 0easy
|
Title: Remove f-string for < 3.5 support
Body: As discovered in https://community.plot.ly/t/jupyterlab-dash-extension/22922, an f-string slipped into the code base which prevents the use of Python 3.5.
https://github.com/plotly/jupyterlab-dash/blob/031a46f527010180b2d2e621e634e260ea545b8b/jupyterlab_dash/__init__.py#L86 | 0easy
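For reference, the usual fix is to replace the f-string with `str.format` or concatenation; a hypothetical example (not the actual line at the link):
```python
port = 8050

# f-string syntax, only valid on Python >= 3.6
url = f'http://localhost:{port}'

# Python 3.5-compatible equivalent
url = 'http://localhost:{}'.format(port)
```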
|