Mirror of https://github.com/OutlineFoundation/outline-server.git (synced 2026-05-13 05:52:04 +00:00)

Commit 3034e59f9f ("Fix lint"), parent c764fef8dd
46 changed files with 1058 additions and 676 deletions
.editorconfig (new file, +10)
@@ -0,0 +1,10 @@
+root = true
+
+[*]
+charset = utf-8
+indent_size = 2
+indent_style = space
+trim_trailing_whitespace = true
+
+[*.md]
+trim_trailing_whitespace = false
.prettierignore (new file, +3)
@@ -0,0 +1,3 @@
+/build/
+node_modules/
+/src/server_manager/messages/
.prettierrc (new file, +5)
@@ -0,0 +1,5 @@
+{
+  "singleQuote": true,
+  "bracketSpacing": false,
+  "printWidth": 100
+}
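For illustration only, here is a hypothetical TypeScript sample (not project code) showing what the three options above produce: `singleQuote` makes string literals use single quotes, `bracketSpacing: false` removes the spaces inside object braces, and `printWidth: 100` wraps lines at 100 columns.

```typescript
// Hypothetical sample formatted under the options above (assumed, not from the repo):
// singleQuote    -> 'hello' rather than "hello"
// bracketSpacing -> {x: 1, y: 2} rather than { x: 1, y: 2 }
// printWidth     -> lines are wrapped to fit within 100 columns
const greeting = 'hello';
const point = {x: 1, y: 2};
console.log(`${greeting}: ${point.x + point.y}`);
```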
@@ -20,4 +20,4 @@ again.
 All submissions, including submissions by project members, require review. We
 use GitHub pull requests for this purpose. Consult
 [GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
-information on using pull requests.
+information on using pull requests.
README.md (21 changes)
@@ -29,35 +29,35 @@ The system comprises the following components:
 
 See [`src/metrics_server`](src/metrics_server)
 
 
 ## Code Prerequisites
 
 In order to build and run the code, you need the following installed:
-- [Node](https://nodejs.org/en/download/) LTS (`lts/gallium`, version `16.13.0`)
-- [NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) (version `8.1.0`)
-- Manager-specic
-  - [Wine](https://www.winehq.org/download), if you would like to generate binaries for Windows.
-- Server-specific
-  - [Docker](https://docs.docker.com/engine/install/), to build the Docker image and to run the integration test.
-  - [docker-compose](https://docs.docker.com/compose/install/), to run the integration test.
+- [Node](https://nodejs.org/en/download/) LTS (`lts/gallium`, version `16.13.0`)
+- [NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) (version `8.1.0`)
+- Manager-specic
+  - [Wine](https://www.winehq.org/download), if you would like to generate binaries for Windows.
+- Server-specific
+  - [Docker](https://docs.docker.com/engine/install/), to build the Docker image and to run the integration test.
+  - [docker-compose](https://docs.docker.com/compose/install/), to run the integration test.
 
 > 💡 NOTE: if you have `nvm` installed, run `nvm use` to switch to the correct node version!
 
 Install dependencies with:
 
 ```sh
 npm install
 ```
 
 This project uses [NPM workspaces](https://docs.npmjs.com/cli/v7/using-npm/workspaces/).
 
 
 ## Build System
 
 We have a very simple build system based on package.json scripts that are called using `npm run`
 and a thin wrapper for what we call build "actions".
 
 We've defined a package.json script called `action` whose parameter is a relative path:
 
 ```shell
 npm run action $ACTION
 ```
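The `npm run action $ACTION` wrapper described in the README hunk above can be pictured as a small dispatcher that resolves a relative action path to a script and runs it. This is a hedged sketch of the idea only; the real wrapper's file layout, script extension, and interpreter are assumptions, not the project's actual implementation.

```typescript
// Hypothetical sketch of an "action" dispatcher (assumed layout, not project code).
import {execFileSync} from 'child_process';
import {existsSync} from 'fs';
import * as path from 'path';

// Map a relative action name to a script path,
// e.g. "metrics_server/build" -> "src/metrics_server/build.action.sh" (assumed convention).
function resolveAction(action: string, root = 'src'): string {
  return path.join(root, `${action}.action.sh`);
}

// Run the resolved script, inheriting stdio so build output is visible.
function runAction(action: string) {
  const script = resolveAction(action);
  if (!existsSync(script)) {
    throw new Error(`No such action: ${action}`);
  }
  execFileSync('bash', [script], {stdio: 'inherit'});
}
```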
@@ -77,6 +77,7 @@ It also defines two environmental variables:
 ### Build output
 
 Building creates the following directories under `build/`:
 
 - `web_app/`: The Manager web app.
   - `static/`: The standalone web app static files. This is what one deploys to a web server or runs with Electron.
 - `electron_app/`: The launcher desktop Electron app
@@ -88,11 +89,13 @@ Building creates the following directories under `build/`:
 - `shadowbox`: The Proxy Server
 
 The directories have subdirectories for intermediate output:
 
 - `ts/`: Autogenerated Typescript files
 - `js/`: The output from compiling Typescript code
 - `browserified/`: The output of browserifying the JavaScript code
 
 To clean up:
 
 ```
 npm run clean
 ```
@@ -1,8 +1,6 @@
 {
   "spec_dir": ".",
-  "spec_files": [
-    "build/js/**/*.spec.js"
-  ],
+  "spec_files": ["build/js/**/*.spec.js"],
   "stopSpecOnExpectationFailure": false,
   "random": false
 }
@@ -21,6 +21,7 @@
     "action:help": "npm run action",
     "action:list": "npm run action",
     "clean": "rm -rf src/*/node_modules/ build/ node_modules/ src/server_manager/install_scripts/do_install_script.ts src/server_manager/install_scripts/gcp_install_script.ts third_party/shellcheck/download/",
+    "format": "prettier \"**/*.{cjs,html,js,json,md,ts}\" --write",
     "lint": "npm run lint:sh && npm run lint:ts",
     "lint:sh": "bash ./scripts/shellcheck.sh",
     "lint:ts": "npx tslint 'src/**/*.ts' -e '**/node_modules/**'",
@@ -31,7 +32,7 @@
   ],
   "husky": {
     "hooks": {
-      "pre-commit": "npm run lint && npx git-clang-format && npx pretty-quick --staged --pattern '**/*.html'"
+      "pre-commit": "npm run lint && npx git-clang-format && npx pretty-quick --staged --pattern \"**/*.{cjs,html,js,json,md,ts}\""
     }
   }
 }
@@ -12,7 +12,7 @@ The metrics server deploys two services: `dev`, used for development testing and
 
 The metrics server supports two URL paths:
 
-* `POST /connections`: report server data usage broken down by user.
+- `POST /connections`: report server data usage broken down by user.
 
 ```
 {
@@ -26,23 +26,24 @@ The metrics server supports two URL paths:
   }]
 }
 ```
-* `POST /features`: report feature usage.
-
-```
-{
-  serverId: string,
-  serverVersion: string,
-  timestampUtcMs: number,
-  dataLimit: {
-    enabled: boolean
-    perKeyLimitCount: number
-  }
-}
-```
+- `POST /features`: report feature usage.
+
+```
+{
+  serverId: string,
+  serverVersion: string,
+  timestampUtcMs: number,
+  dataLimit: {
+    enabled: boolean
+    perKeyLimitCount: number
+  }
+}
+```
+
 
 ## Requirements
 
-* [Google Cloud SDK](https://cloud.google.com/sdk/)
+- [Google Cloud SDK](https://cloud.google.com/sdk/)
 
 ## Build
 
@@ -60,26 +61,26 @@ npm run action metrics_server/start
 
 ## Deploy
 
-* Authenticate with `gcloud`:
+- Authenticate with `gcloud`:
   ```sh
   gcloud auth login
   ```
-* To deploy to dev:
+- To deploy to dev:
   ```sh
   npm run action metrics_server/deploy_dev
   ```
-* To deploy to prod:
+- To deploy to prod:
  ```sh
   npm run action metrics_server/deploy_prod
   ```
 
 ## Test
 
-* Unit test
+- Unit test
   ```sh
   npm run action metrics_server/test
   ```
-* Integration test
+- Integration test
   ```sh
   npm run action metrics_server/test_integration
   ```
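The two report payloads documented in the metrics-server README above can be written out as TypeScript shapes. The field names below come from this document (the README and the spec files later in the diff); the exact type names mirror identifiers that appear in the diffed code, but this is a sketch, not the project's actual model file.

```typescript
// Sketch of the payload shapes described above (field names from this document).
interface HourlyUserConnectionMetricsReport {
  userId: string;
  countries: string[];
  bytesTransferred: number;
}

interface HourlyConnectionMetricsReport {
  serverId: string;
  startUtcMs: number;
  endUtcMs: number;
  userReports: HourlyUserConnectionMetricsReport[];
}

interface DailyFeatureMetricsReport {
  serverId: string;
  serverVersion: string;
  timestampUtcMs: number;
  dataLimit: {enabled: boolean; perKeyLimitCount?: number};
}

// Example `POST /connections` body, matching the values used in the specs.
const example: HourlyConnectionMetricsReport = {
  serverId: 'id',
  startUtcMs: 1,
  endUtcMs: 2,
  userReports: [{userId: 'uid0', countries: ['US'], bytesTransferred: 123}],
};
```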
@@ -12,12 +12,16 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-import {ConnectionRow, isValidConnectionMetricsReport, postConnectionMetrics} from './connection_metrics';
+import {
+  ConnectionRow,
+  isValidConnectionMetricsReport,
+  postConnectionMetrics,
+} from './connection_metrics';
 import {InsertableTable} from './infrastructure/table';
 import {HourlyConnectionMetricsReport} from './model';
 
 class FakeConnectionsTable implements InsertableTable<ConnectionRow> {
-  public rows: ConnectionRow[]|undefined;
+  public rows: ConnectionRow[] | undefined;
 
   async insert(rows: ConnectionRow[]) {
     this.rows = rows;
@@ -37,7 +41,7 @@ describe('postConnectionMetrics', () => {
         userId: 'uid1',
         countries: ['EC'],
         bytesTransferred: 456,
-      }
+      },
     ];
     const report = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports};
     await postConnectionMetrics(table, report);
@@ -48,7 +52,7 @@ describe('postConnectionMetrics', () => {
         endTimestamp: new Date(report.endUtcMs).toISOString(),
         userId: userReports[0].userId,
         bytesTransferred: userReports[0].bytesTransferred,
-        countries: userReports[0].countries
+        countries: userReports[0].countries,
       },
       {
         serverId: report.serverId,
@@ -56,8 +60,8 @@ describe('postConnectionMetrics', () => {
         endTimestamp: new Date(report.endUtcMs).toISOString(),
         userId: userReports[1].userId,
         bytesTransferred: userReports[1].bytesTransferred,
-        countries: userReports[1].countries
-      }
+        countries: userReports[1].countries,
+      },
     ];
     expect(table.rows).toEqual(rows);
   });
@@ -67,7 +71,7 @@ describe('isValidConnectionMetricsReport', () => {
   it('returns true for valid report', () => {
     const userReports = [
       {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123},
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const report = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports};
     expect(isValidConnectionMetricsReport(report)).toBeTruthy();
@@ -78,13 +82,13 @@ describe('isValidConnectionMetricsReport', () => {
   it('returns false for inconsistent timestamp values', () => {
     const userReports = [
       {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123},
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport = {
       serverId: 'id',
-      startUtcMs: 999,  // startUtcMs > endUtcMs
+      startUtcMs: 999, // startUtcMs > endUtcMs
       endUtcMs: 1,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport)).toBeFalsy();
   });
@@ -93,19 +97,20 @@ describe('isValidConnectionMetricsReport', () => {
       {
         userId: 'uid0',
         countries: ['US', 'UK'],
-        bytesTransferred: -123  // Should not be negative
+        bytesTransferred: -123, // Should not be negative
       },
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports};
     expect(isValidConnectionMetricsReport(invalidReport)).toBeFalsy();
 
     const userReports2 = [
-      {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123}, {
+      {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123},
+      {
         userId: 'uid1',
         countries: ['EC'],
-        bytesTransferred: 2 * Math.pow(2, 40)  // 2TB is above the server capacity
-      }
+        bytesTransferred: 2 * Math.pow(2, 40), // 2TB is above the server capacity
+      },
     ];
     const invalidReport2 = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports: userReports2};
     expect(isValidConnectionMetricsReport(invalidReport2)).toBeFalsy();
@@ -123,19 +128,19 @@ describe('isValidConnectionMetricsReport', () => {
       serverId: 'id',
       startUtcMs: 1,
       endUtcMs: 2,
-      userReports: []  // Should not be empty
+      userReports: [], // Should not be empty
     };
     expect(isValidConnectionMetricsReport(invalidReport2)).toBeFalsy();
 
     const userReports = [
       {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123},
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport3 = {
       // Missing `serverId`
       startUtcMs: 1,
       endUtcMs: 2,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport3)).toBeFalsy();
 
@@ -143,7 +148,7 @@ describe('isValidConnectionMetricsReport', () => {
       // Missing `startUtcMs`
       serverId: 'id',
       endUtcMs: 2,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport4)).toBeFalsy();
 
@@ -151,7 +156,7 @@ describe('isValidConnectionMetricsReport', () => {
       // Missing `endUtcMs`
       serverId: 'id',
       startUtcMs: 2,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport5)).toBeFalsy();
   });
@@ -160,26 +165,30 @@ describe('isValidConnectionMetricsReport', () => {
       {
         // Missing `userId`
         countries: ['US', 'UK'],
-        bytesTransferred: 123
+        bytesTransferred: 123,
       },
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports};
     expect(isValidConnectionMetricsReport(invalidReport)).toBeFalsy();
 
-    const userReports2 = [{
-      // Missing `countries`
-      userId: 'uid0',
-      bytesTransferred: 123
-    }];
+    const userReports2 = [
+      {
+        // Missing `countries`
+        userId: 'uid0',
+        bytesTransferred: 123,
+      },
+    ];
     const invalidReport2 = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports: userReports2};
     expect(isValidConnectionMetricsReport(invalidReport2)).toBeFalsy();
 
-    const userReports3 = [{
-      // Missing `bytesTransferred`
-      userId: 'uid0',
-      countries: ['US', 'UK'],
-    }];
+    const userReports3 = [
+      {
+        // Missing `bytesTransferred`
+        userId: 'uid0',
+        countries: ['US', 'UK'],
+      },
+    ];
     const invalidReport3 = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports: userReports3};
     expect(isValidConnectionMetricsReport(invalidReport3)).toBeFalsy();
   });
@@ -188,27 +197,27 @@ describe('isValidConnectionMetricsReport', () => {
       serverId: 'id',
       startUtcMs: 1,
       endUtcMs: 2,
-      userReports: [1, 2, 3]  // Should be `HourlyUserConnectionMetricsReport[]`
+      userReports: [1, 2, 3], // Should be `HourlyUserConnectionMetricsReport[]`
     };
     expect(isValidConnectionMetricsReport(invalidReport)).toBeFalsy();
 
     const userReports = [
       {userId: 'uid0', countries: ['US', 'UK'], bytesTransferred: 123},
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport2 = {
-      serverId: 987,  // Should be a string
+      serverId: 987, // Should be a string
       startUtcMs: 1,
       endUtcMs: 2,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport2)).toBeFalsy();
 
     const invalidReport3 = {
       serverId: 'id',
-      startUtcMs: '100',  // Should be a number
+      startUtcMs: '100', // Should be a number
       endUtcMs: 200,
-      userReports
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport3)).toBeFalsy();
 
@@ -216,36 +225,40 @@ describe('isValidConnectionMetricsReport', () => {
       // Missing `startUtcMs`
       serverId: 'id',
       startUtcMs: 1,
-      endUtcMs: '200',  // Should be a number
-      userReports
+      endUtcMs: '200', // Should be a number
+      userReports,
     };
     expect(isValidConnectionMetricsReport(invalidReport4)).toBeFalsy();
   });
   it('returns false for incorrect user report field types ', () => {
     const userReports = [
       {
-        userId: 1234,  // Should be a string
+        userId: 1234, // Should be a string
         countries: ['US', 'UK'],
-        bytesTransferred: 123
+        bytesTransferred: 123,
       },
-      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456}
+      {userId: 'uid1', countries: ['EC'], bytesTransferred: 456},
     ];
     const invalidReport = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports};
     expect(isValidConnectionMetricsReport(invalidReport)).toBeFalsy();
 
-    const userReports2 = [{
-      userId: 'uid0',
-      countries: [1, 2, 3],  // Should be string[]
-      bytesTransferred: 123
-    }];
+    const userReports2 = [
+      {
+        userId: 'uid0',
+        countries: [1, 2, 3], // Should be string[]
+        bytesTransferred: 123,
+      },
+    ];
     const invalidReport2 = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports: userReports2};
     expect(isValidConnectionMetricsReport(invalidReport2)).toBeFalsy();
 
-    const userReports3 = [{
-      userId: 'uid0',
-      countries: ['US', 'UK'],
-      bytesTransferred: '1234', // Should be a number
-    }];
+    const userReports3 = [
+      {
+        userId: 'uid0',
+        countries: ['US', 'UK'],
+        bytesTransferred: '1234', // Should be a number
+      },
+    ];
     const invalidReport3 = {serverId: 'id', startUtcMs: 1, endUtcMs: 2, userReports: userReports3};
     expect(isValidConnectionMetricsReport(invalidReport3)).toBeFalsy();
   });
@@ -18,8 +18,8 @@ import {HourlyConnectionMetricsReport, HourlyUserConnectionMetricsReport} from '
 
 export interface ConnectionRow {
   serverId: string;
-  startTimestamp: string;  // ISO formatted string.
-  endTimestamp: string;  // ISO formatted string.
+  startTimestamp: string; // ISO formatted string.
+  endTimestamp: string; // ISO formatted string.
   userId: string;
   bytesTransferred: number;
   countries: string[];
@@ -34,7 +34,9 @@ export class BigQueryConnectionsTable implements InsertableTable<ConnectionRow>
 }
 
 export function postConnectionMetrics(
-    table: InsertableTable<ConnectionRow>, report: HourlyConnectionMetricsReport) {
+  table: InsertableTable<ConnectionRow>,
+  report: HourlyConnectionMetricsReport
+) {
   return table.insert(getConnectionRowsFromReport(report));
 }
 
@@ -49,7 +51,7 @@ function getConnectionRowsFromReport(report: HourlyConnectionMetricsReport): Con
       endTimestamp: endTimestampStr,
       userId: userReport.userId,
       bytesTransferred: userReport.bytesTransferred,
-      countries: userReport.countries
+      countries: userReport.countries,
     });
   }
   return rows;
@@ -57,8 +59,9 @@ function getConnectionRowsFromReport(report: HourlyConnectionMetricsReport): Con
 
 // Returns true iff testObject contains a valid HourlyConnectionMetricsReport.
 // tslint:disable-next-line:no-any
-export function isValidConnectionMetricsReport(testObject: any):
-    testObject is HourlyConnectionMetricsReport {
+export function isValidConnectionMetricsReport(
+  testObject: any
+): testObject is HourlyConnectionMetricsReport {
   if (!testObject) {
     return false;
   }
@@ -77,8 +80,11 @@ export function isValidConnectionMetricsReport(testObject: any):
   }
 
   // Check timestamp types and that startUtcMs is not after endUtcMs.
-  if (typeof testObject.startUtcMs !== 'number' || typeof testObject.endUtcMs !== 'number' ||
-      testObject.startUtcMs >= testObject.endUtcMs) {
+  if (
+    typeof testObject.startUtcMs !== 'number' ||
+    typeof testObject.endUtcMs !== 'number' ||
+    testObject.startUtcMs >= testObject.endUtcMs
+  ) {
     return false;
   }
 
@@ -89,7 +95,7 @@ export function isValidConnectionMetricsReport(testObject: any):
 
   const requiredUserReportFields = ['userId', 'countries', 'bytesTransferred'];
   const MIN_BYTES_TRANSFERRED = 0;
-  const MAX_BYTES_TRANSFERRED = 1 * Math.pow(2, 40);  // 1 TB.
+  const MAX_BYTES_TRANSFERRED = 1 * Math.pow(2, 40); // 1 TB.
   for (const userReport of testObject.userReports) {
     // Test that each `userReport` contains the required fields.
     for (const fieldName of requiredUserReportFields) {
@@ -103,9 +109,11 @@ export function isValidConnectionMetricsReport(testObject: any):
   }
 
   // Check that `bytesTransferred` is a number between min and max transfer limits
-  if (typeof userReport.bytesTransferred !== 'number' ||
-      userReport.bytesTransferred < MIN_BYTES_TRANSFERRED ||
-      userReport.bytesTransferred > MAX_BYTES_TRANSFERRED) {
+  if (
+    typeof userReport.bytesTransferred !== 'number' ||
+    userReport.bytesTransferred < MIN_BYTES_TRANSFERRED ||
+    userReport.bytesTransferred > MAX_BYTES_TRANSFERRED
+  ) {
     return false;
   }
 
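The validator reformatted in the hunks above follows a standard TypeScript type-guard pattern: a runtime check whose `x is T` return type lets the compiler narrow the value at call sites. Here is a minimal, self-contained sketch of that pattern with simplified names; `Report` and `isValidReport` are stand-ins, not the project's actual API.

```typescript
// Minimal sketch of the type-guard pattern used by the validators above (simplified names).
interface Report {
  serverId: string;
  startUtcMs: number;
  endUtcMs: number;
}

// tslint:disable-next-line:no-any
function isValidReport(testObject: any): testObject is Report {
  if (!testObject) {
    return false;
  }
  // Presence checks for the required fields.
  for (const field of ['serverId', 'startUtcMs', 'endUtcMs']) {
    if (testObject[field] === undefined) {
      return false;
    }
  }
  // Type checks, plus the ordering invariant startUtcMs < endUtcMs.
  return (
    typeof testObject.serverId === 'string' &&
    typeof testObject.startUtcMs === 'number' &&
    typeof testObject.endUtcMs === 'number' &&
    testObject.startUtcMs < testObject.endUtcMs
  );
}

// When the guard returns true, the compiler narrows `candidate` to `Report`.
const candidate: unknown = {serverId: 'id', startUtcMs: 1, endUtcMs: 2};
console.log(isValidReport(candidate));
```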
@@ -17,7 +17,7 @@ import {InsertableTable} from './infrastructure/table';
 import {DailyFeatureMetricsReport} from './model';
 
 class FakeFeaturesTable implements InsertableTable<FeatureRow> {
-  public rows: FeatureRow[]|undefined;
+  public rows: FeatureRow[] | undefined;
 
   async insert(rows: FeatureRow[]) {
     this.rows = rows;
@@ -31,15 +31,17 @@ describe('postFeatureMetrics', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: false}
+      dataLimit: {enabled: false},
     };
     await postFeatureMetrics(table, report);
-    const rows: FeatureRow[] = [{
-      serverId: report.serverId,
-      serverVersion: report.serverVersion,
-      timestamp: new Date(report.timestampUtcMs).toISOString(),
-      dataLimit: report.dataLimit
-    }];
+    const rows: FeatureRow[] = [
+      {
+        serverId: report.serverId,
+        serverVersion: report.serverVersion,
+        timestamp: new Date(report.timestampUtcMs).toISOString(),
+        dataLimit: report.dataLimit,
+      },
+    ];
     expect(table.rows).toEqual(rows);
   });
 });
@@ -50,7 +52,7 @@ describe('isValidFeatureMetricsReport', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(report)).toBeTruthy();
   });
@@ -59,7 +61,7 @@ describe('isValidFeatureMetricsReport', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true, perKeyLimitCount: 1}
+      dataLimit: {enabled: true, perKeyLimitCount: 1},
     };
     expect(isValidFeatureMetricsReport(report)).toBeTruthy();
   });
@@ -68,7 +70,7 @@ describe('isValidFeatureMetricsReport', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true, perKeyLimitCount: -1}
+      dataLimit: {enabled: true, perKeyLimitCount: -1},
     };
     expect(isValidFeatureMetricsReport(report)).toBeFalsy();
   });
@@ -77,26 +79,26 @@ describe('isValidFeatureMetricsReport', () => {
   });
   it('returns false for incorrect report field types', () => {
     const invalidReport = {
-      serverId: 1234,  // Should be a string
+      serverId: 1234, // Should be a string
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport)).toBeFalsy();
 
     const invalidReport2 = {
       serverId: 'id',
-      serverVersion: 1010,  // Should be a string
+      serverVersion: 1010, // Should be a string
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport2)).toBeFalsy();
 
     const invalidReport3 = {
       serverId: 'id',
       serverVersion: '0.0.0',
-      timestampUtcMs: '123',  // Should be a number
-      dataLimit: {enabled: true}
+      timestampUtcMs: '123', // Should be a number
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport3)).toBeFalsy();
 
@@ -104,7 +106,7 @@ describe('isValidFeatureMetricsReport', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: 'enabled'  // Should be `DailyDataLimitMetricsReport`
+      dataLimit: 'enabled', // Should be `DailyDataLimitMetricsReport`
     };
     expect(isValidFeatureMetricsReport(invalidReport4)).toBeFalsy();
 
@@ -113,8 +115,8 @@ describe('isValidFeatureMetricsReport', () => {
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
       dataLimit: {
-        enabled: 'true'  // Should be a boolean
-      }
+        enabled: 'true', // Should be a boolean
+      },
     };
     expect(isValidFeatureMetricsReport(invalidReport5)).toBeFalsy();
   });
@@ -123,7 +125,7 @@ describe('isValidFeatureMetricsReport', () => {
       // Missing `serverId`
      serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport)).toBeFalsy();
 
@@ -131,7 +133,7 @@ describe('isValidFeatureMetricsReport', () => {
       // Missing `serverVersion`
       serverId: 'id',
       timestampUtcMs: 123456,
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport2)).toBeFalsy();
 
@@ -139,7 +141,7 @@ describe('isValidFeatureMetricsReport', () => {
       // Missing `timestampUtcMs`
       serverId: 'id',
       serverVersion: '0.0.0',
-      dataLimit: {enabled: true}
+      dataLimit: {enabled: true},
     };
     expect(isValidFeatureMetricsReport(invalidReport3)).toBeFalsy();
 
@@ -156,7 +158,7 @@ describe('isValidFeatureMetricsReport', () => {
       serverId: 'id',
       serverVersion: '0.0.0',
       timestampUtcMs: 123456,
-      dataLimit: {}
+      dataLimit: {},
     };
     expect(isValidFeatureMetricsReport(invalidReport5)).toBeFalsy();
   });
@@ -21,25 +21,27 @@ import {DailyDataLimitMetricsReport, DailyFeatureMetricsReport} from './model';
 export interface FeatureRow {
   serverId: string;
   serverVersion: string;
-  timestamp: string;  // ISO formatted string
+  timestamp: string; // ISO formatted string
   dataLimit: DailyDataLimitMetricsReport;
 }
 
 export class BigQueryFeaturesTable implements InsertableTable<FeatureRow> {
   constructor(private bigqueryTable: Table) {}
 
-  async insert(rows: FeatureRow|FeatureRow[]): Promise<void> {
+  async insert(rows: FeatureRow | FeatureRow[]): Promise<void> {
     await this.bigqueryTable.insert(rows);
   }
 }
 
 export async function postFeatureMetrics(
-    table: InsertableTable<FeatureRow>, report: DailyFeatureMetricsReport) {
+  table: InsertableTable<FeatureRow>,
+  report: DailyFeatureMetricsReport
+) {
   const featureRow: FeatureRow = {
     serverId: report.serverId,
     serverVersion: report.serverVersion,
     timestamp: new Date(report.timestampUtcMs).toISOString(),
-    dataLimit: report.dataLimit
+    dataLimit: report.dataLimit,
   };
   return table.insert([featureRow]);
 }
@@ -52,8 +54,12 @@ export function isValidFeatureMetricsReport(obj: any): obj is DailyFeatureMetric
 }
 
   // Check that all required fields are present.
-  const requiredFeatureMetricsReportFields =
-      ['serverId', 'serverVersion', 'timestampUtcMs', 'dataLimit'];
+  const requiredFeatureMetricsReportFields = [
+    'serverId',
+    'serverVersion',
+    'timestampUtcMs',
+    'dataLimit',
+  ];
   for (const fieldName of requiredFeatureMetricsReportFields) {
     if (!obj[fieldName]) {
       return false;
@@ -61,8 +67,11 @@ export function isValidFeatureMetricsReport(obj: any): obj is DailyFeatureMetric
   }
 
   // Validate the report types are what we expect.
-  if (typeof obj.serverId !== 'string' || typeof obj.serverVersion !== 'string' ||
-      typeof obj.timestampUtcMs !== 'number') {
+  if (
+    typeof obj.serverId !== 'string' ||
+    typeof obj.serverVersion !== 'string' ||
+    typeof obj.timestampUtcMs !== 'number'
+  ) {
     return false;
   }
 
@@ -73,7 +82,7 @@ export function isValidFeatureMetricsReport(obj: any): obj is DailyFeatureMetric
 
   // Validate the per-key data limit feature
   const perKeyLimitCount = obj.dataLimit.perKeyLimitCount;
-  if(perKeyLimitCount === undefined) {
+  if (perKeyLimitCount === undefined) {
     return true;
   }
   if (typeof perKeyLimitCount === 'number') {
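The spec files in this diff all rely on the same test-double pattern: production code depends only on an `InsertableTable` interface (backed by BigQuery in the real server), so the tests substitute an in-memory fake that simply records the inserted rows. A minimal sketch of that pattern, with simplified row shapes:

```typescript
// Minimal sketch of the insertable-table test double used by the specs above.
// `Row` is a simplified stand-in; the real project uses ConnectionRow/FeatureRow.
interface InsertableTable<T> {
  insert(rows: T[]): Promise<void>;
}

interface Row {
  serverId: string;
  bytesTransferred: number;
}

// The fake records what was inserted instead of writing to BigQuery.
class FakeTable implements InsertableTable<Row> {
  public rows: Row[] | undefined;

  async insert(rows: Row[]) {
    this.rows = rows;
  }
}

// Code under test only sees the interface, so the fake slots in transparently.
async function postRows(table: InsertableTable<Row>, rows: Row[]) {
  return table.insert(rows);
}

async function demo() {
  const table = new FakeTable();
  await postRows(table, [{serverId: 'id', bytesTransferred: 123}]);
  return table.rows;
}
```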
@@ -36,9 +36,11 @@ const config = loadConfig();
 
 const bigqueryDataset = new BigQuery({projectId: 'uproxysite'}).dataset(config.datasetName);
 const connectionsTable = new connections.BigQueryConnectionsTable(
-    bigqueryDataset.table(config.connectionMetricsTableName));
-const featuresTable =
-    new features.BigQueryFeaturesTable(bigqueryDataset.table(config.featureMetricsTableName));
+  bigqueryDataset.table(config.connectionMetricsTableName)
+);
+const featuresTable = new features.BigQueryFeaturesTable(
+  bigqueryDataset.table(config.featureMetricsTableName)
+);
 
 const app = express();
 // Parse the request body for content-type 'application/json'.
@@ -6,7 +6,5 @@
     "module": "commonjs",
     "outDir": "../../build/metrics_server"
   },
-  "include": [
-    "**/*.ts"
-  ]
+  "include": ["**/*.ts"]
 }
@@ -4,8 +4,8 @@ The Outline Sentry webhook is a [Google Cloud Function](https://cloud.google.com
 
 ## Requirements
 
-* [Google Cloud SDK](https://cloud.google.com/sdk/)
-* Access to Outline's Sentry account.
+- [Google Cloud SDK](https://cloud.google.com/sdk/)
+- Access to Outline's Sentry account.
 
 ## Build
 
@@ -16,20 +16,23 @@ npm run action sentry_webhook/build
 ## Deploy
 
 Authenticate with `gcloud`:
-```sh
-gcloud auth login
-```
+
+```sh
+gcloud auth login
+```
+
 To deploy:
-```sh
-npm run action sentry_webhook/deploy
-```
+
+```sh
+npm run action sentry_webhook/deploy
+```
+
 ## Configure Sentry Webhooks
 
-* Log in to Outline's [Sentry account](https://sentry.io/outlinevpn/)
-* Select a project (outline-client, outline-client-dev, outline-server, outline-server-dev).
-  * Note that this process must be repeated for all Sentry projects.
-* Enable the WebHooks plugin at `https://sentry.io/settings/outlinevpn/<project>/plugins/`
-* Set the webhook endpoint at `https://sentry.io/settings/outlinevpn/<project>/plugins/webhooks/`
-* Configure alerts to invoke the webhook at `https://sentry.io/settings/outlinevpn/<project>/alerts/`
-* Create rules to trigger the webhook at `https://sentry.io/settings/outlinevpn/<project>/alerts/rules/`
+- Log in to Outline's [Sentry account](https://sentry.io/outlinevpn/)
+- Select a project (outline-client, outline-client-dev, outline-server, outline-server-dev).
+  - Note that this process must be repeated for all Sentry projects.
+- Enable the WebHooks plugin at `https://sentry.io/settings/outlinevpn/<project>/plugins/`
+- Set the webhook endpoint at `https://sentry.io/settings/outlinevpn/<project>/plugins/webhooks/`
+- Configure alerts to invoke the webhook at `https://sentry.io/settings/outlinevpn/<project>/alerts/`
+- Create rules to trigger the webhook at `https://sentry.io/settings/outlinevpn/<project>/alerts/rules/`
|
@@ -15,7 +15,10 @@
 import * as sentry from '@sentry/types';
 import * as express from 'express';
 
-import {postSentryEventToSalesforce, shouldPostEventToSalesforce} from './post_sentry_event_to_salesforce';
+import {
+  postSentryEventToSalesforce,
+  shouldPostEventToSalesforce,
+} from './post_sentry_event_to_salesforce';
 
 exports.postSentryEventToSalesforce = (req: express.Request, res: express.Response<string>) => {
   if (req.method !== 'POST') {
@@ -35,13 +38,13 @@ exports.postSentryEventToSalesforce = (req: express.Request, res: express.Respon
   // Use the request message if SentryEvent.message is unpopulated.
   sentryEvent.message = sentryEvent.message || req.body.message;
   postSentryEventToSalesforce(sentryEvent, req.body.project)
-      .then(() => {
-        res.status(200).send();
-      })
-      .catch((e) => {
-        console.error(e);
-        // Send an OK response to Sentry - they don't need to know about errors with posting to
-        // Salesforce.
-        res.status(200).send();
-      });
+    .then(() => {
+      res.status(200).send();
+    })
+    .catch((e) => {
+      console.error(e);
+      // Send an OK response to Sentry - they don't need to know about errors with posting to
+      // Salesforce.
+      res.status(200).send();
+    });
 };
@@ -48,7 +48,7 @@ const SALESFORCE_FORM_FIELDS_DEV: SalesforceFormFields = {
   sentryEventUrl: '00N3F000002Rqhq',
   os: '00N3F000002cLcN',
   version: '00N3F000002cLcI',
-  type: 'type'
+  type: 'type',
 };
 const SALESFORCE_FORM_FIELDS_PROD: SalesforceFormFields = {
   orgId: 'orgid',

@@ -60,7 +60,7 @@ const SALESFORCE_FORM_FIELDS_PROD: SalesforceFormFields = {
   sentryEventUrl: '00N0b00000BqOA4',
   os: '00N0b00000BqOfW',
   version: '00N0b00000BqOfR',
-  type: 'type'
+  type: 'type',
 };
 const SALESFORCE_FORM_VALUES_DEV: SalesforceFormValues = {
   orgId: '00D3F000000DDDH',

@@ -80,7 +80,9 @@ export function shouldPostEventToSalesforce(event: sentry.SentryEvent) {
 // Posts a Sentry event to Salesforce using predefined form data. Assumes
 // `shouldPostEventToSalesforce` has returned true for `event`.
 export function postSentryEventToSalesforce(
-    event: sentry.SentryEvent, project: string): Promise<void> {
+  event: sentry.SentryEvent,
+  project: string
+): Promise<void> {
   return new Promise((resolve, reject) => {
     // Sentry development projects are marked with 'dev', i.e. outline-client-dev.
     const isProd = project.indexOf('-dev') === -1;

@@ -88,26 +90,33 @@ export function postSentryEventToSalesforce(
     const formFields = isProd ? SALESFORCE_FORM_FIELDS_PROD : SALESFORCE_FORM_FIELDS_DEV;
     const formValues = isProd ? SALESFORCE_FORM_VALUES_PROD : SALESFORCE_FORM_VALUES_DEV;
     const isClient = project.indexOf('client') !== -1;
-    const formData =
-        getSalesforceFormData(formFields, formValues, event, event.user!.email!, isClient, project);
+    const formData = getSalesforceFormData(
+      formFields,
+      formValues,
+      event,
+      event.user!.email!,
+      isClient,
+      project
+    );
     const req = https.request(
-        {
-          host: salesforceHost,
-          path: SALESFORCE_PATH,
-          protocol: 'https:',
-          method: 'post',
-          headers: {
-            // The production server will reject requests that do not specify this content type.
-            'Content-Type': 'application/x-www-form-urlencoded'
-          }
-        },
-        (res) => {
-          if (res.statusCode === 200) {
-            resolve();
-          } else {
-            reject(new Error(`Failed to post form data, response status: ${res.statusCode}`));
-          }
-        });
+      {
+        host: salesforceHost,
+        path: SALESFORCE_PATH,
+        protocol: 'https:',
+        method: 'post',
+        headers: {
+          // The production server will reject requests that do not specify this content type.
+          'Content-Type': 'application/x-www-form-urlencoded',
+        },
+      },
+      (res) => {
+        if (res.statusCode === 200) {
+          resolve();
+        } else {
+          reject(new Error(`Failed to post form data, response status: ${res.statusCode}`));
+        }
+      }
+    );
     req.on('error', (err) => {
       reject(new Error(`Failed to submit form: ${err}`));
     });

@@ -118,8 +127,13 @@ export function postSentryEventToSalesforce(
 
 // Returns a URL-encoded string with the Salesforce form data.
 function getSalesforceFormData(
-    formFields: SalesforceFormFields, formValues: SalesforceFormValues, event: sentry.SentryEvent,
-    email: string, isClient: boolean, project: string): string {
+  formFields: SalesforceFormFields,
+  formValues: SalesforceFormValues,
+  event: sentry.SentryEvent,
+  email: string,
+  isClient: boolean,
+  project: string
+): string {
   const form = [];
   form.push(encodeFormData(formFields.orgId, formValues.orgId));
   form.push(encodeFormData(formFields.recordType, formValues.recordType));
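`getSalesforceFormData` above joins encoded key/value pairs into the request body. As a hedged sketch of that step (the real `encodeFormData` is not shown in this commit, so its shape here is an assumption):

```typescript
// Assumed shape of the form encoding used above: URL-encode each key/value
// pair and join the pairs with '&', producing an
// application/x-www-form-urlencoded body.
function encodeFormData(field: string, value: string): string {
  return `${encodeURIComponent(field)}=${encodeURIComponent(value)}`;
}

const form = [
  encodeFormData('orgid', '00D3F000000DDDH'),
  encodeFormData('type', 'bug report'),
];
console.log(form.join('&')); // orgid=00D3F000000DDDH&type=bug%20report
```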
@@ -6,7 +6,5 @@
     "module": "commonjs",
     "outDir": "../../build/sentry_webhook"
   },
-  "include": [
-    "**/*.ts"
-  ]
+  "include": ["**/*.ts"]
 }
@@ -10,11 +10,13 @@ client apps. Shadowbox is also compatible with standard Shadowsocks clients.
 ## Self-hosted installation
 
 To install and run Shadowbox on your own server, run
+
 ```
 sudo bash -c "$(wget -qO- https://raw.githubusercontent.com/Jigsaw-Code/outline-server/master/src/server_manager/install_scripts/install_server.sh)"
 ```
 You can specify flags to customize the installation. For example, to use hostname `myserver.com` and port 443 for access keys, you can run:
+
 ```
 sudo bash -c "$(wget -qO- https://raw.githubusercontent.com/Jigsaw-Code/outline-server/master/src/server_manager/install_scripts/install_server.sh)" install_server.sh --hostname=myserver.com --keys-port=443
 ```
 
@@ -35,9 +37,11 @@ Besides [Node](https://nodejs.org/en/download/) you will also need:
 ### Running Shadowbox as a Node.js app
 
 Build and run the server as a Node.js app:
 
 ```
 npm run action shadowbox/server/start
 ```
 
 The output will be at `build/shadowbox/app`.
 
 ### Running Shadowbox as a Docker container
@@ -45,36 +49,41 @@ The output will be at `build/shadowbox/app`.
 ### With docker command
 
 Build the image and run the server:
 
 ```
 npm run action shadowbox/docker/start
 ```
 
 You should be able to successfully query the management API:
 
 ```
 curl --insecure https://[::]:8081/TestApiPrefix/server
 ```
 
 To build the image only:
 
 ```
 npm run action shadowbox/docker/build
 ```
 
 Debug image:
 
 ```
 docker run --rm -it --entrypoint=sh outline/shadowbox
 ```
 
 Or a running container:
 
 ```
 docker exec -it shadowbox sh
 ```
 
 Delete dangling images:
 
 ```
 docker rmi $(docker images -f dangling=true -q)
 ```
 
 ## Access Keys Management API
 
 In order to utilize the Management API, you'll need to know the apiUrl for your Outline server.
@@ -87,6 +96,7 @@ The OpenAPI specification can be found at [api.yml](./server/api.yml).
 ### Examples
 
 Start by storing the apiUrl you see in that file as a variable. For example:
 
 ```
 API_URL=https://1.2.3.4:1234/3pQ4jf6qSr5WVeMO0XOo4z
 ```
@@ -94,34 +104,40 @@ API_URL=https://1.2.3.4:1234/3pQ4jf6qSr5WVeMO0XOo4z
 You can then perform the following operations on the server, remotely.
 
 List access keys:
 
 ```
 curl --insecure $API_URL/access-keys/
 ```
 
 Create an access key:
 
 ```
 curl --insecure -X POST $API_URL/access-keys
 ```
 
 Rename an access key (e.g. rename access key 2 to 'albion'):
 
 ```
 curl --insecure -X PUT -F 'name=albion' $API_URL/access-keys/2/name
 ```
 
 Remove an access key (e.g. remove access key 2):
 
 ```
 curl --insecure -X DELETE $API_URL/access-keys/2
 ```
 
 Set a data limit for all access keys (e.g. limit outbound data transfer for access keys to 1MB over 30 days):
 
 ```
 curl -v --insecure -X PUT -H "Content-Type: application/json" -d '{"limit": {"bytes": 1000}}' $API_URL/experimental/access-key-data-limit
 ```
 
 Remove the access key data limit:
 
 ```
 curl -v --insecure -X DELETE $API_URL/experimental/access-key-data-limit
 ```
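The curl operations above can also be scripted. A minimal TypeScript sketch that mirrors them as request descriptors (an illustration only, not part of the Outline codebase; `API_URL` is the placeholder value from the example above):

```typescript
// Builds request descriptors that mirror the curl examples above. This is a
// hypothetical helper, not an official Outline client; API_URL is the
// placeholder from the README.
const API_URL = 'https://1.2.3.4:1234/3pQ4jf6qSr5WVeMO0XOo4z';

interface ApiRequest {
  method: 'GET' | 'POST' | 'PUT' | 'DELETE';
  url: string;
}

function listAccessKeys(apiUrl: string): ApiRequest {
  return {method: 'GET', url: `${apiUrl}/access-keys/`};
}

function createAccessKey(apiUrl: string): ApiRequest {
  return {method: 'POST', url: `${apiUrl}/access-keys`};
}

function removeAccessKey(apiUrl: string, id: string): ApiRequest {
  return {method: 'DELETE', url: `${apiUrl}/access-keys/${id}`};
}

console.log(removeAccessKey(API_URL, '2').url);
```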
@@ -142,11 +158,13 @@ modified image.
 ### Automated
 
 To run the integration test:
 
 ```
 npm run action shadowbox/integration_test/start
 ```
 
 This will set up three containers and two networks:
 
 ```
 client <-> shadowbox <-> target
 ```
@@ -155,11 +173,13 @@
 
 To test clients that rely on fetching a docker image from Dockerhub, you can push an image to your account and modify the
 client to use your image. To push your own image:
 
 ```
 npm run action shadowbox/docker/build && docker tag quay.io/outline/shadowbox $USER/shadowbox && docker push $USER/shadowbox
 ```
 
 If you need to test an unsigned image (e.g. your dev one):
 
 ```
 DOCKER_CONTENT_TRUST=0 SB_IMAGE=$USER/shadowbox npm run action shadowbox/integration_test/start
 ```
@@ -175,4 +195,4 @@ start-up time, then you may need to remove the pre-existing test config:
 rm /tmp/outline/persisted-state/shadowbox_server_config.json
 ```
 
-This will warn about deleting a write-protected file, which is okay to ignore. You will then need to hand-edit the JSON string in src/shadowbox/docker/start.action.sh.
\ No newline at end of file
+This will warn about deleting a write-protected file, which is okay to ignore. You will then need to hand-edit the JSON string in src/shadowbox/docker/start.action.sh.
@@ -49,4 +49,4 @@ export class ManualClock implements Clock {
     await callback();
   }
-  }
-}
\ No newline at end of file
+  }
+}
@@ -8,7 +8,7 @@ describe('file', () => {
   describe('readFileIfExists', () => {
     let tmpFile: tmp.FileResult;
 
-    beforeEach(() => tmpFile = tmp.fileSync());
+    beforeEach(() => (tmpFile = tmp.fileSync()));
 
     it('reads the file if it exists', () => {
      const contents = 'test';
@@ -24,14 +24,14 @@ describe('file', () => {
       expect(file.readFileIfExists(tmpFile.name)).toBe('');
     });
 
-    it('returns null if file doesn\'t exist',
-       () => expect(file.readFileIfExists(tmp.tmpNameSync())).toBe(null));
+    it("returns null if file doesn't exist", () =>
+      expect(file.readFileIfExists(tmp.tmpNameSync())).toBe(null));
   });
 
   describe('atomicWriteFileSync', () => {
     let tmpFile: tmp.FileResult;
 
-    beforeEach(() => tmpFile = tmp.fileSync());
+    beforeEach(() => (tmpFile = tmp.fileSync()));
 
     it('writes to the file', () => {
       const contents = 'test';
@@ -44,20 +44,24 @@ describe('file', () => {
     it('supports multiple simultaneous writes to the same file', async () => {
       const writeCount = 100;
 
-      const writer = (_, id) => new Promise<void>((resolve, reject) => {
-        try {
-          file.atomicWriteFileSync(
-              tmpFile.name, `${fs.readFileSync(tmpFile.name, {encoding: 'utf-8'})}${id}\n`);
-          resolve();
-        } catch (e) {
-          reject(e);
-        }
-      });
+      const writer = (_, id) =>
+        new Promise<void>((resolve, reject) => {
+          try {
+            file.atomicWriteFileSync(
+              tmpFile.name,
+              `${fs.readFileSync(tmpFile.name, {encoding: 'utf-8'})}${id}\n`
+            );
+            resolve();
+          } catch (e) {
+            reject(e);
+          }
+        });
 
       await Promise.all(Array.from({length: writeCount}, writer));
 
-      expect(fs.readFileSync(tmpFile.name, {encoding: 'utf8'}).trimEnd().split('\n').length)
-          .toBe(writeCount);
+      expect(fs.readFileSync(tmpFile.name, {encoding: 'utf8'}).trimEnd().split('\n').length).toBe(
+        writeCount
+      );
     });
   });
 });
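The spec above exercises `atomicWriteFileSync` without showing it. A common sketch of the technique, assuming the usual write-temp-then-rename approach rather than the repository's exact implementation:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Hypothetical sketch of an atomic write: write the full contents to a
// temporary file in the same directory, then rename it over the target.
// rename() is atomic on POSIX filesystems, so readers never observe a
// partially written file.
function atomicWriteFileSync(filename: string, contents: string): void {
  const tempPath = path.join(
    path.dirname(filename),
    `.${path.basename(filename)}.${process.pid}.${Date.now()}.tmp`
  );
  fs.writeFileSync(tempPath, contents, {encoding: 'utf-8'});
  fs.renameSync(tempPath, filename);
}

const target = path.join(os.tmpdir(), `atomic-demo-${process.pid}.txt`);
atomicWriteFileSync(target, 'hello');
console.log(fs.readFileSync(target, {encoding: 'utf-8'})); // prints "hello"
```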
@@ -20,7 +20,9 @@ import fetch, {RequestInit, Response} from 'node-fetch';
 // method to GET and removing the request body. The options parameter matches the
 // fetch() function.
 export async function requestFollowRedirectsWithSameMethodAndBody(
-    url: string, options: RequestInit): Promise<Response> {
+  url: string,
+  options: RequestInit
+): Promise<Response> {
   // Make a copy of options to modify parameters.
   const manualRedirectOptions = {
     ...options,
@@ -31,7 +31,6 @@ export function loadFileConfig<T>(filename: string): JsonConfig<T> {
   return new FileConfig<T>(filename, dataJson);
 }
 
-
 // FileConfig is a JsonConfig backed by a filesystem file.
 export class FileConfig<T> implements JsonConfig<T> {
   constructor(private filename: string, private dataJson: T) {}
@@ -35,7 +35,7 @@ function getCallsite(): Callsite {
 }
 
 // Possible values for the level prefix.
-type LevelPrefix = 'E'|'W'|'I'|'D';
+type LevelPrefix = 'E' | 'W' | 'I' | 'D';
 
 // Formats the log message. Example:
 // I2018-08-16T16:46:21.577Z 167288 main.js:86] ...
@@ -46,8 +46,9 @@ function makeLogMessage(level: LevelPrefix, callsite: Callsite, message: string)
   const timeStr = new Date().toISOString();
   // TODO(alalama): preserve the source file structure in the webpack build so we can use
   // `callsite.getFileName()`.
-  return `${level}${timeStr} ${process.pid} ${
-      path.basename(callsite.getFileName() || __filename)}:${callsite.getLineNumber()}] ${message}`;
+  return `${level}${timeStr} ${process.pid} ${path.basename(
+    callsite.getFileName() || __filename
+  )}:${callsite.getLineNumber()}] ${message}`;
 }
 
 export function error(message: string) {
@@ -12,7 +12,6 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-
 import * as child_process from 'child_process';
 import * as fs from 'fs';
 import * as http from 'http';
@@ -23,17 +22,16 @@ import * as path from 'path';
 import * as logging from '../infrastructure/logging';
 
 export interface QueryResultData {
-  resultType: 'matrix'|'vector'|'scalar'|'string';
-  result: Array < {
+  resultType: 'matrix' | 'vector' | 'scalar' | 'string';
+  result: Array<{
     metric: {[labelValue: string]: string};
     value: [number, string];
-  }
-> ;
+  }>;
 }
 
 // From https://prometheus.io/docs/prometheus/latest/querying/api/
 interface QueryResult {
-  status: 'success'|'error';
+  status: 'success' | 'error';
   data: QueryResultData;
   errorType: string;
   error: string;
@@ -45,40 +43,46 @@ export class PrometheusClient {
   query(query: string): Promise<QueryResultData> {
     return new Promise<QueryResultData>((fulfill, reject) => {
       const url = `${this.address}/api/v1/query?query=${encodeURIComponent(query)}`;
-      http.get(url, (response) => {
-        if (response.statusCode < 200 || response.statusCode > 299) {
-          reject(new Error(`Got error ${response.statusCode}`));
-          response.resume();
-          return;
-        }
-        let body = '';
-        response.on('data', (data) => {
-          body += data;
-        });
-        response.on('end', () => {
-          const result = JSON.parse(body) as QueryResult;
-          if (result.status !== 'success') {
-            return reject(new Error(`Error ${result.errorType}: ${result.error}`));
-          }
-          fulfill(result.data);
-        });
-      }).on('error', (e) => {
-        reject(new Error(`Failed to query prometheus API: ${e}`));
-      });
+      http
+        .get(url, (response) => {
+          if (response.statusCode < 200 || response.statusCode > 299) {
+            reject(new Error(`Got error ${response.statusCode}`));
+            response.resume();
+            return;
+          }
+          let body = '';
+          response.on('data', (data) => {
+            body += data;
+          });
+          response.on('end', () => {
+            const result = JSON.parse(body) as QueryResult;
+            if (result.status !== 'success') {
+              return reject(new Error(`Error ${result.errorType}: ${result.error}`));
+            }
+            fulfill(result.data);
+          });
+        })
+        .on('error', (e) => {
+          reject(new Error(`Failed to query prometheus API: ${e}`));
+        });
     });
   }
 }
 
 export async function startPrometheus(
-    binaryFilename: string, configFilename: string, configJson: {}, processArgs: string[],
-    endpoint: string) {
+  binaryFilename: string,
+  configFilename: string,
+  configJson: {},
+  processArgs: string[],
+  endpoint: string
+) {
   await writePrometheusConfigToDisk(configFilename, configJson);
   await spawnPrometheusSubprocess(binaryFilename, processArgs, endpoint);
 }
 
 async function writePrometheusConfigToDisk(configFilename: string, configJson: {}) {
   await mkdirp.sync(path.dirname(configFilename));
-  const ymlTxt = jsyaml.safeDump(configJson, {'sortKeys': true});
+  const ymlTxt = jsyaml.safeDump(configJson, {sortKeys: true});
   // Write the file asynchronously to prevent blocking the node thread.
   await new Promise<void>((resolve, reject) => {
     fs.writeFile(configFilename, ymlTxt, 'utf-8', (err) => {
@@ -92,8 +96,10 @@ async function writePrometheusConfigToDisk(configFilename: string, configJson: {
 }
 
 async function spawnPrometheusSubprocess(
-    binaryFilename: string, processArgs: string[],
-    prometheusEndpoint: string): Promise<child_process.ChildProcess> {
+  binaryFilename: string,
+  processArgs: string[],
+  prometheusEndpoint: string
+): Promise<child_process.ChildProcess> {
   logging.info(`Starting Prometheus with args [${processArgs}]`);
   const runProcess = child_process.spawn(binaryFilename, processArgs);
   runProcess.on('error', (error) => {
@@ -120,11 +126,13 @@ async function waitForPrometheusReady(prometheusEndpoint: string) {
 
 function isHttpEndpointHealthy(endpoint: string): Promise<boolean> {
   return new Promise((resolve, reject) => {
-    http.get(endpoint, (response) => {
-      resolve(response.statusCode >= 200 && response.statusCode < 300);
-    }).on('error', (e) => {
-      // Prometheus is not ready yet.
-      resolve(false);
-    });
+    http
+      .get(endpoint, (response) => {
+        resolve(response.statusCode >= 200 && response.statusCode < 300);
+      })
+      .on('error', (e) => {
+        // Prometheus is not ready yet.
+        resolve(false);
+      });
   });
 }
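`PrometheusClient.query` above builds its request URL by concatenating the instant-query endpoint with an encoded PromQL expression. A small self-contained sketch of that construction (the address and query are example values, not configuration from this repository):

```typescript
// Builds a Prometheus HTTP API instant-query URL the same way as the code
// above: plain concatenation around encodeURIComponent. The address below is
// an example value.
function buildPrometheusQueryUrl(address: string, query: string): string {
  return `${address}/api/v1/query?query=${encodeURIComponent(query)}`;
}

const url = buildPrometheusQueryUrl(
  'http://127.0.0.1:9090',
  'sum(increase(shadowsocks_data_bytes[1h])) by (access_key)'
);
console.log(url);
```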
@@ -1 +1,3 @@
-<html>TARGET PAGE CONTENT</html>
+<html>
+TARGET PAGE CONTENT
+</html>
@@ -62,7 +62,7 @@ export interface AccessKeyRepository {
   // Apply the specified update to the specified access key. Throws on failure.
   renameAccessKey(id: AccessKeyId, name: string): void;
   // Gets the metrics id for a given Access Key.
-  getMetricsId(id: AccessKeyId): AccessKeyMetricsId|undefined;
+  getMetricsId(id: AccessKeyId): AccessKeyMetricsId | undefined;
   // Sets a data transfer limit for all access keys.
   setDefaultDataLimit(limit: DataLimit): void;
   // Removes the access key data transfer limit.
@@ -25,4 +25,6 @@ export interface DataUsageByUser {
 }
 
 // Sliding time frame for measuring data utilization.
-export interface DataUsageTimeframe { hours: number; }
+export interface DataUsageTimeframe {
+  hours: number;
+}
@@ -33,7 +33,12 @@ import {bindService, ShadowsocksManagerService} from './manager_service';
 import {OutlineShadowsocksServer} from './outline_shadowsocks_server';
 import {AccessKeyConfigJson, ServerAccessKeyRepository} from './server_access_key';
 import * as server_config from './server_config';
-import {OutlineSharedMetricsPublisher, PrometheusUsageMetrics, RestMetricsCollectorClient, SharedMetricsPublisher} from './shared_metrics';
+import {
+  OutlineSharedMetricsPublisher,
+  PrometheusUsageMetrics,
+  RestMetricsCollectorClient,
+  SharedMetricsPublisher,
+} from './shared_metrics';
 
 const APP_BASE_DIR = path.join(__dirname, '..');
 const DEFAULT_STATE_DIR = '/root/shadowbox/persisted-state';
@@ -53,14 +58,17 @@ async function exportPrometheusMetrics(registry: prometheus.Registry, port): Pro
 }
 
 function reserveExistingAccessKeyPorts(
-    keyConfig: json_config.JsonConfig<AccessKeyConfigJson>, portProvider: PortProvider) {
+  keyConfig: json_config.JsonConfig<AccessKeyConfigJson>,
+  portProvider: PortProvider
+) {
   const accessKeys = keyConfig.data().accessKeys || [];
-  const dedupedPorts = new Set(accessKeys.map(ak => ak.port));
-  dedupedPorts.forEach(p => portProvider.addReservedPort(p));
+  const dedupedPorts = new Set(accessKeys.map((ak) => ak.port));
+  dedupedPorts.forEach((p) => portProvider.addReservedPort(p));
 }
 
-function createRolloutTracker(serverConfig: json_config.JsonConfig<server_config.ServerConfigJson>):
-    RolloutTracker {
+function createRolloutTracker(
+  serverConfig: json_config.JsonConfig<server_config.ServerConfigJson>
+): RolloutTracker {
   const rollouts = new RolloutTracker(serverConfig.data().serverId);
   if (serverConfig.data().rollouts) {
     for (const rollout of serverConfig.data().rollouts) {
@@ -74,7 +82,8 @@ async function main() {
   const verbose = process.env.LOG_LEVEL === 'debug';
   const portProvider = new PortProvider();
   const accessKeyConfig = json_config.loadFileConfig<AccessKeyConfigJson>(
-      getPersistentFilename('shadowbox_config.json'));
+    getPersistentFilename('shadowbox_config.json')
+  );
   reserveExistingAccessKeyPorts(accessKeyConfig, portProvider);
 
   prometheus.collectDefaultMetrics({register: prometheus.register});
@@ -94,8 +103,9 @@ async function main() {
   }
   portProvider.addReservedPort(apiPortNumber);
 
-  const serverConfig =
-      server_config.readServerConfig(getPersistentFilename('shadowbox_server_config.json'));
+  const serverConfig = server_config.readServerConfig(
+    getPersistentFilename('shadowbox_server_config.json')
+  );
 
   const proxyHostname = serverConfig.data().hostname;
   if (!proxyHostname) {
@@ -130,22 +140,29 @@ async function main() {
     scrape_configs: [
       {job_name: 'prometheus', static_configs: [{targets: [prometheusLocation]}]},
       {job_name: 'outline-server-main', static_configs: [{targets: [nodeMetricsLocation]}]},
-    ]
+    ],
   };
 
   const ssMetricsLocation = `127.0.0.1:${ssMetricsPort}`;
   logging.info(`outline-ss-server metrics is at ${ssMetricsLocation}`);
-  prometheusConfigJson.scrape_configs.push(
-      {job_name: 'outline-server-ss', static_configs: [{targets: [ssMetricsLocation]}]});
+  prometheusConfigJson.scrape_configs.push({
+    job_name: 'outline-server-ss',
+    static_configs: [{targets: [ssMetricsLocation]}],
+  });
   const shadowsocksServer = new OutlineShadowsocksServer(
-      getBinaryFilename('outline-ss-server'), getPersistentFilename('outline-ss-server/config.yml'),
-      verbose, ssMetricsLocation);
+    getBinaryFilename('outline-ss-server'),
+    getPersistentFilename('outline-ss-server/config.yml'),
+    verbose,
+    ssMetricsLocation
+  );
   if (fs.existsSync(MMDB_LOCATION)) {
     shadowsocksServer.enableCountryMetrics(MMDB_LOCATION);
   }
 
-  const isReplayProtectionEnabled =
-      createRolloutTracker(serverConfig).isRolloutEnabled('replay-protection', 100);
+  const isReplayProtectionEnabled = createRolloutTracker(serverConfig).isRolloutEnabled(
+    'replay-protection',
+    100
+  );
   logging.info(`Replay protection enabled: ${isReplayProtectionEnabled}`);
   if (isReplayProtectionEnabled) {
     shadowsocksServer.enableReplayProtection();
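The `isRolloutEnabled` call above gates replay protection by a percentage of servers. `RolloutTracker`'s hashing scheme is not shown in this commit; as a hedged illustration only, a deterministic percentage gate can look like this:

```typescript
import * as crypto from 'crypto';

// Hypothetical percentage gate: hashes serverId + feature name into a bucket
// in [0, 100) and enables the feature when the bucket falls below the target
// percentage. Deterministic per server, so each server keeps its decision
// across restarts. Not the repository's RolloutTracker implementation.
function isEnabledForServer(serverId: string, feature: string, percentage: number): boolean {
  const digest = crypto.createHash('sha256').update(`${serverId}:${feature}`).digest();
  const bucket = digest.readUInt16BE(0) % 100;
  return bucket < percentage;
}

// A 100% rollout is always on; a 0% rollout is always off.
console.log(isEnabledForServer('my-server-id', 'replay-protection', 100)); // true
console.log(isEnabledForServer('my-server-id', 'replay-protection', 0)); // false
```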
@@ -157,13 +174,25 @@ async function main() {
   const prometheusEndpoint = `http://${prometheusLocation}`;
   const prometheusBinary = getBinaryFilename('prometheus');
   const prometheusArgs = [
-    '--config.file', prometheusConfigFilename, '--web.enable-admin-api',
-    '--storage.tsdb.retention.time', '31d', '--storage.tsdb.path', prometheusTsdbFilename,
-    '--web.listen-address', prometheusLocation, '--log.level', verbose ? 'debug' : 'info'
+    '--config.file',
+    prometheusConfigFilename,
+    '--web.enable-admin-api',
+    '--storage.tsdb.retention.time',
+    '31d',
+    '--storage.tsdb.path',
+    prometheusTsdbFilename,
+    '--web.listen-address',
+    prometheusLocation,
+    '--log.level',
+    verbose ? 'debug' : 'info',
   ];
   await startPrometheus(
-      prometheusBinary, prometheusConfigFilename, prometheusConfigJson, prometheusArgs,
-      prometheusEndpoint);
+    prometheusBinary,
+    prometheusConfigFilename,
+    prometheusConfigJson,
+    prometheusArgs,
+    prometheusEndpoint
+  );
 
   const prometheusClient = new PrometheusClient(prometheusEndpoint);
   if (!serverConfig.data().portForNewAccessKeys) {
@@ -171,8 +200,13 @@ async function main() {
     serverConfig.write();
   }
   const accessKeyRepository = new ServerAccessKeyRepository(
-      serverConfig.data().portForNewAccessKeys, proxyHostname, accessKeyConfig, shadowsocksServer,
-      prometheusClient, serverConfig.data().accessKeyDataLimit);
+    serverConfig.data().portForNewAccessKeys,
+    proxyHostname,
+    accessKeyConfig,
+    shadowsocksServer,
+    prometheusClient,
+    serverConfig.data().accessKeyDataLimit
+  );
 
   const metricsReader = new PrometheusUsageMetrics(prometheusClient);
   const toMetricsId = (id: AccessKeyId) => {
@@ -185,21 +219,35 @@ async function main() {
   const managerMetrics = new PrometheusManagerMetrics(prometheusClient);
   const metricsCollector = new RestMetricsCollectorClient(metricsCollectorUrl);
   const metricsPublisher: SharedMetricsPublisher = new OutlineSharedMetricsPublisher(
-      new RealClock(), serverConfig, accessKeyConfig, metricsReader, toMetricsId, metricsCollector);
+    new RealClock(),
+    serverConfig,
+    accessKeyConfig,
+    metricsReader,
+    toMetricsId,
+    metricsCollector
+  );
   const managerService = new ShadowsocksManagerService(
-      process.env.SB_DEFAULT_SERVER_NAME || 'Outline Server', serverConfig, accessKeyRepository,
-      managerMetrics, metricsPublisher);
+    process.env.SB_DEFAULT_SERVER_NAME || 'Outline Server',
+    serverConfig,
+    accessKeyRepository,
+    managerMetrics,
+    metricsPublisher
+  );
 
   const certificateFilename = process.env.SB_CERTIFICATE_FILE;
   const privateKeyFilename = process.env.SB_PRIVATE_KEY_FILE;
   const apiServer = restify.createServer({
     certificate: fs.readFileSync(certificateFilename),
-    key: fs.readFileSync(privateKeyFilename)
+    key: fs.readFileSync(privateKeyFilename),
   });
 
   // Pre-routing handlers
-  const cors =
-      corsMiddleware({origins: ['*'], allowHeaders: [], exposeHeaders: [], credentials: false});
+  const cors = corsMiddleware({
+    origins: ['*'],
+    allowHeaders: [],
+    exposeHeaders: [],
+    credentials: false,
+  });
   apiServer.pre(cors.preflight);
   apiServer.pre(restify.pre.sanitizePath());
@@ -20,7 +20,8 @@ import {FakePrometheusClient} from './mocks/mocks';
 describe('PrometheusManagerMetrics', () => {
   it('getOutboundByteTransfer', async (done) => {
     const managerMetrics = new PrometheusManagerMetrics(
-        new FakePrometheusClient({'access-key-1': 1000, 'access-key-2': 10000}));
+      new FakePrometheusClient({'access-key-1': 1000, 'access-key-2': 10000})
+    );
     const dataUsage = await managerMetrics.getOutboundByteTransfer({hours: 0});
     const bytesTransferredByUserId = dataUsage.bytesTransferredByUserId;
     expect(Object.keys(bytesTransferredByUserId).length).toEqual(2);
@@ -27,9 +27,9 @@ export class PrometheusManagerMetrics implements ManagerMetrics {
     // TODO(fortuna): Consider pre-computing this to save server's CPU.
     // We measure only traffic leaving the server, since that's what DigitalOcean charges.
     // TODO: Display all directions to admin
-    const result =
-        await this.prometheusClient.query(`sum(increase(shadowsocks_data_bytes{dir=~"c<p|p>t"}[${
-            timeframe.hours}h])) by (access_key)`);
+    const result = await this.prometheusClient.query(
+      `sum(increase(shadowsocks_data_bytes{dir=~"c<p|p>t"}[${timeframe.hours}h])) by (access_key)`
+    );
     const usage = {} as {[userId: string]: number};
     for (const entry of result.result) {
       const bytes = Math.round(parseFloat(entry.value[1]));
@@ -33,8 +33,15 @@ interface ServerInfo {
 
 const NEW_PORT = 12345;
 const OLD_PORT = 54321;
-const EXPECTED_ACCESS_KEY_PROPERTIES =
-    ['id', 'name', 'password', 'port', 'method', 'accessUrl', 'dataLimit'].sort();
+const EXPECTED_ACCESS_KEY_PROPERTIES = [
+  'id',
+  'name',
+  'password',
+  'port',
+  'method',
+  'accessUrl',
+  'dataLimit',
+].sort();
 
 describe('ShadowsocksManagerService', () => {
   // After processing the response callback, we should set
@@ -53,38 +60,44 @@ describe('ShadowsocksManagerService', () => {
     const repo = getAccessKeyRepository();
     const serverConfig = new InMemoryConfig({} as ServerConfigJson);
     const service = new ShadowsocksManagerServiceBuilder()
-                        .serverConfig(serverConfig)
-                        .accessKeys(repo)
-                        .build();
+      .serverConfig(serverConfig)
+      .accessKeys(repo)
+      .build();
     service.getServer(
-        {params: {}}, {
-          send: (httpCode, data: ServerInfo) => {
-            expect(httpCode).toEqual(200);
-            expect(data.name).toEqual('default name');
-            responseProcessed = true;
-          }
-        },
-        done);
+      {params: {}},
+      {
+        send: (httpCode, data: ServerInfo) => {
+          expect(httpCode).toEqual(200);
+          expect(data.name).toEqual('default name');
+          responseProcessed = true;
+        },
+      },
+      done
+    );
   });
   it('Returns persisted properties', (done) => {
     const repo = getAccessKeyRepository();
     const defaultDataLimit = {bytes: 999};
-    const serverConfig =
-        new InMemoryConfig({name: 'Server', accessKeyDataLimit: defaultDataLimit} as ServerConfigJson);
+    const serverConfig = new InMemoryConfig({
+      name: 'Server',
+      accessKeyDataLimit: defaultDataLimit,
+    } as ServerConfigJson);
     const service = new ShadowsocksManagerServiceBuilder()
-                        .serverConfig(serverConfig)
-                        .accessKeys(repo)
-                        .build();
+      .serverConfig(serverConfig)
+      .accessKeys(repo)
+      .build();
     service.getServer(
-        {params: {}}, {
-          send: (httpCode, data: ServerInfo) => {
-            expect(httpCode).toEqual(200);
-            expect(data.name).toEqual('Server');
-            expect(data.accessKeyDataLimit).toEqual(defaultDataLimit);
-            responseProcessed = true;
-          }
-        },
-        done);
+      {params: {}},
+      {
+        send: (httpCode, data: ServerInfo) => {
+          expect(httpCode).toEqual(200);
+          expect(data.name).toEqual('Server');
+          expect(data.accessKeyDataLimit).toEqual(defaultDataLimit);
+          responseProcessed = true;
+        },
+      },
+      done
+    );
   });
 });
@ -93,18 +106,20 @@ describe('ShadowsocksManagerService', () => {
|
|||
const repo = getAccessKeyRepository();
|
||||
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
|
||||
const service = new ShadowsocksManagerServiceBuilder()
|
||||
.serverConfig(serverConfig)
|
||||
.accessKeys(repo)
|
||||
.build();
|
||||
.serverConfig(serverConfig)
|
||||
.accessKeys(repo)
|
||||
.build();
|
||||
service.renameServer(
|
||||
{params: {name: 'new name'}}, {
|
||||
send: (httpCode, _) => {
|
||||
expect(httpCode).toEqual(204);
|
||||
expect(serverConfig.mostRecentWrite.name).toEqual('new name');
|
||||
responseProcessed = true;
|
||||
}
|
||||
{params: {name: 'new name'}},
|
||||
{
|
||||
send: (httpCode, _) => {
|
||||
expect(httpCode).toEqual(204);
|
||||
expect(serverConfig.mostRecentWrite.name).toEqual('new name');
|
||||
responseProcessed = true;
|
||||
},
|
||||
done);
|
||||
},
|
||||
done
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
|
|
@@ -112,19 +127,26 @@ describe('ShadowsocksManagerService', () => {
it(`accepts valid hostnames`, (done) => {
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();

const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
}
},
};

const goodHostnames = [
'-bad', 'localhost', 'example.com', 'www.example.org', 'www.exa-mple.tw', '123abc.co.uk',
'93.184.216.34', '::0', '2606:2800:220:1:248:1893:25c8:1946'
'-bad',
'localhost',
'example.com',
'www.example.org',
'www.exa-mple.tw',
'123abc.co.uk',
'93.184.216.34',
'::0',
'2606:2800:220:1:248:1893:25c8:1946',
];
for (const hostname of goodHostnames) {
service.setHostnameForAccessKeys({params: {hostname}}, res, () => {});

@@ -136,19 +158,23 @@ describe('ShadowsocksManagerService', () => {
it(`rejects invalid hostnames`, (done) => {
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();

const res = {send: (httpCode) => {}};
const next = (error) => {
expect(error.statusCode).toEqual(400);
};

const badHostnames = [
null, '', '-abc.com', 'abc-.com', 'abc.com/def', 'i_have_underscores.net',
'gggg:ggg:220:1:248:1893:25c8:1946'
null,
'',
'-abc.com',
'abc-.com',
'abc.com/def',
'i_have_underscores.net',
'gggg:ggg:220:1:248:1893:25c8:1946',
];
for (const hostname of badHostnames) {
service.setHostnameForAccessKeys({params: {hostname}}, res, next);
@@ -157,28 +183,28 @@ describe('ShadowsocksManagerService', () => {
responseProcessed = true;
done();
});
it('Changes the server\'s hostname', (done) => {
it("Changes the server's hostname", (done) => {
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
const hostname = 'www.example.org';
const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(serverConfig.data().hostname).toEqual(hostname);
responseProcessed = true;
}
},
};
service.setHostnameForAccessKeys({params: {hostname}}, res, done);
});
it('Rejects missing hostname', (done) => {
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
const res = {send: (httpCode) => {}};
const next = (error) => {
expect(error.statusCode).toEqual(400);

@@ -191,9 +217,9 @@ describe('ShadowsocksManagerService', () => {
it('Rejects non-string hostname', (done) => {
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
.serverConfig(serverConfig)
.accessKeys(getAccessKeyRepository())
.build();
const res = {send: (httpCode) => {}};
const next = (error) => {
expect(error.statusCode).toEqual(400);

@@ -201,7 +227,7 @@ describe('ShadowsocksManagerService', () => {
done();
};
// tslint:disable-next-line: no-any
const badHostname = ({params: {hostname: 123}} as any) as {params: {hostname: string}};
const badHostname = {params: {hostname: 123}} as any as {params: {hostname: string}};
service.setHostnameForAccessKeys(badHostname, res, next);
});
});

@@ -222,8 +248,8 @@ describe('ShadowsocksManagerService', () => {
expect(data.accessKeys[0].id).toEqual(key1.id);
expect(data.accessKeys[1].name).toEqual(key2.name);
expect(data.accessKeys[1].id).toEqual(key2.id);
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
service.listAccessKeys({params: {}}, res, done);
});

@@ -243,8 +269,8 @@ describe('ShadowsocksManagerService', () => {
expect(Object.keys(serviceAccessKey1).sort()).toEqual(EXPECTED_ACCESS_KEY_PROPERTIES);
expect(Object.keys(serviceAccessKey2).sort()).toEqual(EXPECTED_ACCESS_KEY_PROPERTIES);
expect(serviceAccessKey1.name).toEqual(accessKeyName);
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
service.listAccessKeys({params: {}}, res, done);
});

@@ -260,8 +286,8 @@ describe('ShadowsocksManagerService', () => {
send: (httpCode, data) => {
expect(httpCode).toEqual(201);
expect(Object.keys(data).sort()).toEqual(EXPECTED_ACCESS_KEY_PROPERTIES);
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
service.createNewAccessKey({params: {}}, res, done);
});

@@ -273,7 +299,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.createNewAccessKey({params: {}}, res, (error) => {
expect(error.statusCode).toEqual(500);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});
@@ -283,15 +309,15 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

const oldKey = await repo.createNewAccessKey();
const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
}
},
};
await service.setPortForNewAccessKeys({params: {port: NEW_PORT}}, res, () => {});
const newKey = await repo.createNewAccessKey();

@@ -305,16 +331,16 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(serverConfig.data().portForNewAccessKeys).toEqual(NEW_PORT);
responseProcessed = true;
}
},
};
await service.setPortForNewAccessKeys({params: {port: NEW_PORT}}, res, done);
});

@@ -323,16 +349,16 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

const res = {
send: (httpCode) => {
fail(
`setPortForNewAccessKeys should have failed with 400 Bad Request, instead succeeded with code ${
httpCode}`);
}
`setPortForNewAccessKeys should have failed with 400 Bad Request, instead succeeded with code ${httpCode}`
);
},
};
const next = (error) => {
// Bad Request

@@ -352,16 +378,16 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

const res = {
send: (httpCode) => {
fail(
`setPortForNewAccessKeys should have failed with 409 Conflict, instead succeeded with code ${
httpCode}`);
}
`setPortForNewAccessKeys should have failed with 409 Conflict, instead succeeded with code ${httpCode}`
);
},
};
const next = (error) => {
// Conflict

@@ -380,9 +406,9 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

await service.createNewAccessKey({params: {}}, {send: () => {}}, () => {});
await service.setPortForNewAccessKeys({params: {port: NEW_PORT}}, {send: () => {}}, () => {});

@@ -390,7 +416,7 @@ describe('ShadowsocksManagerService', () => {
send: (httpCode) => {
expect(httpCode).toEqual(204);
responseProcessed = true;
}
},
};

const firstKeyConnection = new net.Server();
@@ -405,17 +431,17 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const serverConfig = new InMemoryConfig({} as ServerConfigJson);
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();

const noPort = {params: {}};
const res = {
send: (httpCode) => {
fail(
`setPortForNewAccessKeys should have failed with 400 BadRequest, instead succeeded with code ${
httpCode}`);
}
`setPortForNewAccessKeys should have failed with 400 BadRequest, instead succeeded with code ${httpCode}`
);
},
};
const next = (error) => {
expect(error.statusCode).toEqual(400);

@@ -426,7 +452,10 @@ describe('ShadowsocksManagerService', () => {
const nonNumericPort = {params: {port: 'abc'}};
await service.setPortForNewAccessKeys(
// tslint:disable-next-line: no-any
(nonNumericPort as any) as {params: {port: number}}, res, next);
nonNumericPort as any as {params: {port: number}},
res,
next
);

responseProcessed = true;
done();

@@ -446,8 +475,8 @@ describe('ShadowsocksManagerService', () => {
const keys = repo.listAccessKeys();
expect(keys.length).toEqual(1);
expect(keys[0].id === key2.id);
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
// remove the 1st key.
service.removeAccessKey({params: {id: key1.id}}, res, done);

@@ -460,7 +489,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.removeAccessKey({params: {id: key.id}}, res, (error) => {
expect(error.statusCode).toEqual(500);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});

@@ -479,8 +508,8 @@ describe('ShadowsocksManagerService', () => {
send: (httpCode, data) => {
expect(httpCode).toEqual(204);
expect(key.name === NEW_NAME);
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
service.renameAccessKey({params: {id: key.id, name: NEW_NAME}}, res, done);
});

@@ -492,7 +521,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.renameAccessKey({params: {id: 123}}, res, (error) => {
expect(error.statusCode).toEqual(400);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});

@@ -505,7 +534,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.renameAccessKey({params: {id: key.id, name: 'newName'}}, res, (error) => {
expect(error.statusCode).toEqual(500);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});
@@ -517,12 +546,14 @@ describe('ShadowsocksManagerService', () => {
const service = new ShadowsocksManagerServiceBuilder().accessKeys(repo).build();
const key = await repo.createNewAccessKey();
const limit = {bytes: 1000};
const res = {send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(key.dataLimit.bytes).toEqual(1000);
responseProcessed = true;
done();
}};
const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(key.dataLimit.bytes).toEqual(1000);
responseProcessed = true;
done();
},
};
service.setAccessKeyDataLimit({params: {id: key.id, limit}}, res, () => {});
});

@@ -542,7 +573,7 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
const service = new ShadowsocksManagerServiceBuilder().accessKeys(repo).build();
const keyId = (await repo.createNewAccessKey()).id;
const limit = {bytes: "1"};
const limit = {bytes: '1'};
service.setAccessKeyDataLimit({params: {id: keyId, limit}}, {send: () => {}}, (error) => {
expect(error.statusCode).toEqual(400);
responseProcessed = true;

@@ -567,11 +598,15 @@ describe('ShadowsocksManagerService', () => {
const service = new ShadowsocksManagerServiceBuilder().accessKeys(repo).build();
await repo.createNewAccessKey();
const limit: DataLimit = {bytes: 1000};
service.setAccessKeyDataLimit({params: {id: "not an id", limit}}, {send: () => {}}, (error) => {
expect(error.statusCode).toEqual(404);
responseProcessed = true;
done();
});
service.setAccessKeyDataLimit(
{params: {id: 'not an id', limit}},
{send: () => {}},
(error) => {
expect(error.statusCode).toEqual(404);
responseProcessed = true;
done();
}
);
});
});

@@ -582,19 +617,21 @@ describe('ShadowsocksManagerService', () => {
const key = await repo.createNewAccessKey();
repo.setAccessKeyDataLimit(key.id, {bytes: 1000});
await repo.enforceAccessKeyDataLimits();
const res = {send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(key.dataLimit).toBeFalsy();
responseProcessed = true;
done();
}};
const res = {
send: (httpCode) => {
expect(httpCode).toEqual(204);
expect(key.dataLimit).toBeFalsy();
responseProcessed = true;
done();
},
};
service.removeAccessKeyDataLimit({params: {id: key.id}}, res, () => {});
});
it('returns 404 for a nonexistent key', async (done) => {
const repo = getAccessKeyRepository();
const service = new ShadowsocksManagerServiceBuilder().accessKeys(repo).build();
await repo.createNewAccessKey();
service.removeAccessKeyDataLimit({params: {id: "not an id"}}, {send: () => {}}, (error) => {
service.removeAccessKeyDataLimit({params: {id: 'not an id'}}, {send: () => {}}, (error) => {
expect(error.statusCode).toEqual(404);
responseProcessed = true;
done();

@@ -608,9 +645,9 @@ describe('ShadowsocksManagerService', () => {
const repo = getAccessKeyRepository();
spyOn(repo, 'setDefaultDataLimit');
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
const limit = {bytes: 10000};
const res = {
send: (httpCode, data) => {
@@ -618,15 +655,17 @@ describe('ShadowsocksManagerService', () => {
expect(serverConfig.data().accessKeyDataLimit).toEqual(limit);
expect(repo.setDefaultDataLimit).toHaveBeenCalledWith(limit);
service.getServer(
{params: {}}, {
send: (httpCode, data: ServerInfo) => {
expect(httpCode).toEqual(200);
expect(data.accessKeyDataLimit).toEqual(limit);
responseProcessed = true; // required for afterEach to pass.
}
{params: {}},
{
send: (httpCode, data: ServerInfo) => {
expect(httpCode).toEqual(200);
expect(data.accessKeyDataLimit).toEqual(limit);
responseProcessed = true; // required for afterEach to pass.
},
done);
}
},
done
);
},
};
service.setDefaultDataLimit({params: {limit}}, res, done);
});

@@ -638,7 +677,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.setDefaultDataLimit({params: {limit}}, res, (error) => {
expect(error.statusCode).toEqual(400);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});

@@ -650,7 +689,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.setDefaultDataLimit({params: {limit}}, res, (error) => {
expect(error.statusCode).toEqual(400);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});

@@ -663,7 +702,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.setDefaultDataLimit({params: {limit}}, res, (error) => {
expect(error.statusCode).toEqual(500);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});

@@ -672,21 +711,21 @@ describe('ShadowsocksManagerService', () => {
describe('removeDefaultDataLimit', () => {
it('clears default data limit', async (done) => {
const limit = {bytes: 10000};
const serverConfig = new InMemoryConfig({'accessKeyDataLimit': limit} as ServerConfigJson);
const serverConfig = new InMemoryConfig({accessKeyDataLimit: limit} as ServerConfigJson);
const repo = getAccessKeyRepository();
spyOn(repo, 'removeDefaultDataLimit').and.callThrough();
const service = new ShadowsocksManagerServiceBuilder()
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
.serverConfig(serverConfig)
.accessKeys(repo)
.build();
await repo.setDefaultDataLimit(limit);
const res = {
send: (httpCode, data) => {
expect(httpCode).toEqual(204);
expect(serverConfig.data().accessKeyDataLimit).toBeUndefined();
expect(repo.removeDefaultDataLimit).toHaveBeenCalled();
responseProcessed = true; // required for afterEach to pass.
}
responseProcessed = true; // required for afterEach to pass.
},
};
service.removeDefaultDataLimit({params: {}}, res, done);
});

@@ -698,7 +737,7 @@ describe('ShadowsocksManagerService', () => {
const res = {send: (httpCode, data) => {}};
service.removeDefaultDataLimit({params: {id: accessKey.id}}, res, (error) => {
expect(error.statusCode).toEqual(500);
responseProcessed = true; // required for afterEach to pass.
responseProcessed = true; // required for afterEach to pass.
done();
});
});
@@ -708,34 +747,40 @@ describe('ShadowsocksManagerService', () => {
it('Returns value from sharedMetrics', (done) => {
const sharedMetrics = fakeSharedMetricsReporter();
sharedMetrics.startSharing();
const service =
new ShadowsocksManagerServiceBuilder().metricsPublisher(sharedMetrics).build();
const service = new ShadowsocksManagerServiceBuilder()
.metricsPublisher(sharedMetrics)
.build();
service.getShareMetrics(
{params: {}}, {
send: (httpCode, data: {metricsEnabled: boolean}) => {
expect(httpCode).toEqual(200);
expect(data.metricsEnabled).toEqual(true);
responseProcessed = true;
}
{params: {}},
{
send: (httpCode, data: {metricsEnabled: boolean}) => {
expect(httpCode).toEqual(200);
expect(data.metricsEnabled).toEqual(true);
responseProcessed = true;
},
done);
},
done
);
});
});
describe('setShareMetrics', () => {
it('Sets value in the config', (done) => {
const sharedMetrics = fakeSharedMetricsReporter();
sharedMetrics.stopSharing();
const service =
new ShadowsocksManagerServiceBuilder().metricsPublisher(sharedMetrics).build();
const service = new ShadowsocksManagerServiceBuilder()
.metricsPublisher(sharedMetrics)
.build();
service.setShareMetrics(
{params: {metricsEnabled: true}}, {
send: (httpCode, _) => {
expect(httpCode).toEqual(204);
expect(sharedMetrics.isSharingEnabled()).toEqual(true);
responseProcessed = true;
}
{params: {metricsEnabled: true}},
{
send: (httpCode, _) => {
expect(httpCode).toEqual(204);
expect(sharedMetrics.isSharingEnabled()).toEqual(true);
responseProcessed = true;
},
done);
},
done
);
});
});
});
@@ -746,7 +791,7 @@ describe('bindService', () => {
let url: URL;
const PREFIX = '/TestApiPrefix';

const fakeResponse = {'foo': 'bar'};
const fakeResponse = {foo: 'bar'};
const fakeHandler = async (req, res, next) => {
res.send(200, fakeResponse);
next();

@@ -764,7 +809,7 @@ describe('bindService', () => {
});

it('basic routing', async () => {
spyOn(service, "renameServer").and.callFake(fakeHandler);
spyOn(service, 'renameServer').and.callFake(fakeHandler);
bindService(server, PREFIX, service);

url.pathname = `${PREFIX}/name`;

@@ -776,7 +821,7 @@ describe('bindService', () => {
});

it('parameterized routing', async () => {
spyOn(service, "removeAccessKeyDataLimit").and.callFake(fakeHandler);
spyOn(service, 'removeAccessKeyDataLimit').and.callFake(fakeHandler);
bindService(server, PREFIX, service);

url.pathname = `${PREFIX}/access-keys/fake-access-key-id/data-limit`;

@@ -796,7 +841,7 @@ describe('bindService', () => {
'/123TestApiPrefix',
'/very-long-path-that-does-not-exist',
`${PREFIX}/does-not-exist`,
].forEach(path => {
].forEach((path) => {
it(`404 (${path})`, async () => {
// Ensure no methods are called on the Service.
spyOnAllFunctions(service);

@@ -810,7 +855,7 @@ describe('bindService', () => {
expect(response.status).toEqual(404);
expect(body).toEqual({
code: 'ResourceNotFound',
message: `${path} does not exist`
message: `${path} does not exist`,
});
});
});

@@ -819,7 +864,7 @@ describe('bindService', () => {
it(`standard routing for authorized queries`, async () => {
bindService(server, PREFIX, service);
// Verify that ordinary routing goes through the Router.
spyOn(server.router, "lookup").and.callThrough();
spyOn(server.router, 'lookup').and.callThrough();

// This is an authorized request, so it will pass the prefix filter
// and reach the Router.

@@ -833,13 +878,7 @@ describe('bindService', () => {

// Check that unauthorized queries are rejected without ever reaching
// the routing stage.
[
'/',
'/T',
'/TestApiPre',
'/TestApi123456',
'/TestApi123456789',
].forEach(path => {
['/', '/T', '/TestApiPre', '/TestApi123456', '/TestApi123456789'].forEach((path) => {
it(`no routing for unauthorized queries (${path})`, async () => {
bindService(server, PREFIX, service);
// Ensure no methods are called on the Router.
@@ -901,13 +940,19 @@ class ShadowsocksManagerServiceBuilder {

build(): ShadowsocksManagerService {
return new ShadowsocksManagerService(
this.defaultServerName_, this.serverConfig_, this.accessKeys_, this.managerMetrics_,
this.metricsPublisher_);
this.defaultServerName_,
this.serverConfig_,
this.accessKeys_,
this.managerMetrics_,
this.metricsPublisher_
);
}
}

async function createNewAccessKeyWithName(
repo: AccessKeyRepository, name: string): Promise<AccessKey> {
repo: AccessKeyRepository,
name: string
): Promise<AccessKey> {
const accessKey = await repo.createNewAccessKey();
try {
repo.renameAccessKey(accessKey.id, name);

@@ -928,12 +973,16 @@ function fakeSharedMetricsReporter(): SharedMetricsPublisher {
},
isSharingEnabled(): boolean {
return sharing;
}
},
};
}

function getAccessKeyRepository(): ServerAccessKeyRepository {
return new ServerAccessKeyRepository(
OLD_PORT, 'hostname', new InMemoryConfig<AccessKeyConfigJson>({accessKeys: [], nextId: 0}),
new FakeShadowsocksServer(), new FakePrometheusClient({}));
OLD_PORT,
'hostname',
new InMemoryConfig<AccessKeyConfigJson>({accessKeys: [], nextId: 0}),
new FakeShadowsocksServer(),
new FakePrometheusClient({})
);
}
@@ -40,13 +40,15 @@ function accessKeyToApiJson(accessKey: AccessKey) {
port: accessKey.proxyParams.portNumber,
method: accessKey.proxyParams.encryptionMethod,
dataLimit: accessKey.dataLimit,
accessUrl: SIP002_URI.stringify(makeConfig({
host: accessKey.proxyParams.hostname,
port: accessKey.proxyParams.portNumber,
method: accessKey.proxyParams.encryptionMethod,
password: accessKey.proxyParams.password,
outline: 1
}))
accessUrl: SIP002_URI.stringify(
makeConfig({
host: accessKey.proxyParams.hostname,
port: accessKey.proxyParams.portNumber,
method: accessKey.proxyParams.encryptionMethod,
password: accessKey.proxyParams.password,
outline: 1,
})
),
};
}

@@ -99,30 +101,45 @@ function prefixFilter(apiPrefix: string): restify.RequestHandler {
}

export function bindService(
apiServer: restify.Server, apiPrefix: string, service: ShadowsocksManagerService) {
apiServer: restify.Server,
apiPrefix: string,
service: ShadowsocksManagerService
) {
// Reject unauthorized requests in constant time before they reach the routing step.
apiServer.pre(prefixFilter(apiPrefix));

apiServer.put(`${apiPrefix}/name`, service.renameServer.bind(service));
apiServer.get(`${apiPrefix}/server`, service.getServer.bind(service));
apiServer.put(
`${apiPrefix}/server/access-key-data-limit`, service.setDefaultDataLimit.bind(service));
`${apiPrefix}/server/access-key-data-limit`,
service.setDefaultDataLimit.bind(service)
);
apiServer.del(
`${apiPrefix}/server/access-key-data-limit`, service.removeDefaultDataLimit.bind(service));
`${apiPrefix}/server/access-key-data-limit`,
service.removeDefaultDataLimit.bind(service)
);
apiServer.put(
`${apiPrefix}/server/hostname-for-access-keys`,
service.setHostnameForAccessKeys.bind(service));
`${apiPrefix}/server/hostname-for-access-keys`,
service.setHostnameForAccessKeys.bind(service)
);
apiServer.put(
`${apiPrefix}/server/port-for-new-access-keys`,
service.setPortForNewAccessKeys.bind(service));
`${apiPrefix}/server/port-for-new-access-keys`,
service.setPortForNewAccessKeys.bind(service)
);

apiServer.post(`${apiPrefix}/access-keys`, service.createNewAccessKey.bind(service));
apiServer.get(`${apiPrefix}/access-keys`, service.listAccessKeys.bind(service));

apiServer.del(`${apiPrefix}/access-keys/:id`, service.removeAccessKey.bind(service));
apiServer.put(`${apiPrefix}/access-keys/:id/name`, service.renameAccessKey.bind(service));
apiServer.put(`${apiPrefix}/access-keys/:id/data-limit`, service.setAccessKeyDataLimit.bind(service));
apiServer.del(`${apiPrefix}/access-keys/:id/data-limit`, service.removeAccessKeyDataLimit.bind(service));
apiServer.put(
`${apiPrefix}/access-keys/:id/data-limit`,
service.setAccessKeyDataLimit.bind(service)
);
apiServer.del(
`${apiPrefix}/access-keys/:id/data-limit`,
service.removeAccessKeyDataLimit.bind(service)
);

apiServer.get(`${apiPrefix}/metrics/transfer`, service.getDataUsage.bind(service));
apiServer.get(`${apiPrefix}/metrics/enabled`, service.getShareMetrics.bind(service));
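The comment in the hunk above says `prefixFilter` rejects unauthorized requests "in constant time" before routing; the filter's body is not part of this diff. The following is only a sketch of how such a check could be done with Node's `crypto.timingSafeEqual` — the function name `hasApiPrefix` is hypothetical, not the module's actual export:

```typescript
import {timingSafeEqual} from 'crypto';

// Hypothetical sketch: check that `path` starts with the secret `apiPrefix`
// without a byte-by-byte early exit that would leak where the mismatch is.
function hasApiPrefix(path: string, apiPrefix: string): boolean {
  const prefix = Buffer.from(apiPrefix);
  const candidate = Buffer.from(path.slice(0, apiPrefix.length));
  // timingSafeEqual requires equal-length buffers; a length mismatch
  // short-circuits, leaking only the prefix length, not its contents.
  if (candidate.length !== prefix.length) {
    return false;
  }
  return timingSafeEqual(candidate, prefix);
}
```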
@@ -130,11 +147,13 @@ export function bindService(

// Redirect former experimental APIs
apiServer.put(
`${apiPrefix}/experimental/access-key-data-limit`,
redirect(`${apiPrefix}/server/access-key-data-limit`));
`${apiPrefix}/experimental/access-key-data-limit`,
redirect(`${apiPrefix}/server/access-key-data-limit`)
);
apiServer.del(
`${apiPrefix}/experimental/access-key-data-limit`,
redirect(`${apiPrefix}/server/access-key-data-limit`));
`${apiPrefix}/experimental/access-key-data-limit`,
redirect(`${apiPrefix}/server/access-key-data-limit`)
);
}

// Returns a request handler that redirects a bound request path to `url` with HTTP status code 308.

@@ -150,20 +169,23 @@ function validateAccessKeyId(accessKeyId: unknown): string {
throw new restifyErrors.MissingParameterError({statusCode: 400}, 'Parameter `id` is missing');
} else if (typeof accessKeyId !== 'string') {
throw new restifyErrors.InvalidArgumentError(
{statusCode: 400}, 'Parameter `id` must be of type string');
{statusCode: 400},
'Parameter `id` must be of type string'
);
}
return accessKeyId;
}

function validateDataLimit(limit: unknown): DataLimit {
if (!limit) {
throw new restifyErrors.MissingParameterError(
{statusCode: 400}, 'Missing `limit` parameter');
throw new restifyErrors.MissingParameterError({statusCode: 400}, 'Missing `limit` parameter');
}
const bytes = (limit as DataLimit).bytes;
if (!(Number.isInteger(bytes) && bytes >= 0)) {
throw new restifyErrors.InvalidArgumentError(
{statusCode: 400}, '`limit.bytes` must be an non-negative integer');
{statusCode: 400},
'`limit.bytes` must be an non-negative integer'
);
}
return limit as DataLimit;
}
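The validators in this hunk depend on restify's error classes, but the core check in `validateDataLimit` can be illustrated in isolation. This is a sketch with plain `Error`s standing in for `MissingParameterError` and `InvalidArgumentError`, not the module's actual code:

```typescript
interface DataLimit {
  bytes: number;
}

// Sketch of the validation logic shown above: the limit must exist and its
// `bytes` field must be a non-negative integer.
function validateDataLimit(limit: unknown): DataLimit {
  if (!limit) {
    throw new Error('Missing `limit` parameter');
  }
  const bytes = (limit as DataLimit).bytes;
  if (!(Number.isInteger(bytes) && bytes >= 0)) {
    throw new Error('`limit.bytes` must be a non-negative integer');
  }
  return limit as DataLimit;
}
```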
@@ -173,20 +195,27 @@ function validateDataLimit(limit: unknown): DataLimit {
// for each existing access key, with the port and password assigned for that access key.
export class ShadowsocksManagerService {
constructor(
private defaultServerName: string, private serverConfig: JsonConfig<ServerConfigJson>,
private accessKeys: AccessKeyRepository, private managerMetrics: ManagerMetrics,
private metricsPublisher: SharedMetricsPublisher) {}
private defaultServerName: string,
private serverConfig: JsonConfig<ServerConfigJson>,
private accessKeys: AccessKeyRepository,
private managerMetrics: ManagerMetrics,
private metricsPublisher: SharedMetricsPublisher
) {}

public renameServer(req: RequestType, res: ResponseType, next: restify.Next): void {
logging.debug(`renameServer request ${JSON.stringify(req.params)}`);
const name = req.params.name;
if (!name) {
return next(new restifyErrors.MissingParameterError(
{statusCode: 400}, 'Parameter `name` is missing'));
return next(
new restifyErrors.MissingParameterError({statusCode: 400}, 'Parameter `name` is missing')
);
}
if (typeof name !== 'string' || name.length > 100) {
next(new restifyErrors.InvalidArgumentError(
`Requested server name should be a string <= 100 characters long. Got ${name}`));
next(
new restifyErrors.InvalidArgumentError(
`Requested server name should be a string <= 100 characters long. Got ${name}`
)
);
return;
}
this.serverConfig.data().name = name;

@@ -204,7 +233,7 @@ export class ShadowsocksManagerService {
version,
accessKeyDataLimit: this.serverConfig.data().accessKeyDataLimit,
portForNewAccessKeys: this.serverConfig.data().portForNewAccessKeys,
hostnameForAccessKeys: this.serverConfig.data().hostname
hostnameForAccessKeys: this.serverConfig.data().hostname,
});
next();
}

@@ -216,20 +245,28 @@ export class ShadowsocksManagerService {
const hostname = req.params.hostname;
if (typeof hostname === 'undefined') {
return next(
new restifyErrors.MissingParameterError({statusCode: 400}, 'hostname must be provided'));
new restifyErrors.MissingParameterError({statusCode: 400}, 'hostname must be provided')
);
}
if (typeof hostname !== 'string') {
return next(new restifyErrors.InvalidArgumentError(
return next(
new restifyErrors.InvalidArgumentError(
{statusCode: 400},
`Expected hostname to be a string, instead got ${hostname} of type ${typeof hostname}`));
`Expected hostname to be a string, instead got ${hostname} of type ${typeof hostname}`
)
);
}
// Hostnames can have any number of segments of alphanumeric characters and hyphens, separated
// by periods. No segment may start or end with a hyphen.
const hostnameRegex =
/^([a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9\-]*[A-Za-z0-9])?$/;
/^([a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9\-]*[A-Za-z0-9])?$/;
if (!hostnameRegex.test(hostname) && !ipRegex({includeBoundaries: true}).test(hostname)) {
return next(new restifyErrors.InvalidArgumentError(
{statusCode: 400}, `Hostname ${hostname} isn't a valid hostname or IP address`));
return next(
new restifyErrors.InvalidArgumentError(
{statusCode: 400},
`Hostname ${hostname} isn't a valid hostname or IP address`
)
);
}

this.serverConfig.data().hostname = hostname;
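The hostname rule in this hunk can be exercised directly. The regex below is copied verbatim from the diff; the fallback IP-address branch via `ipRegex` is omitted here:

```typescript
// Copied from the diff: segments of alphanumeric characters and hyphens,
// separated by periods; no segment may start or end with a hyphen.
const hostnameRegex =
  /^([a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9\-]*[A-Za-z0-9])?$/;

function isValidHostname(hostname: string): boolean {
  return hostnameRegex.test(hostname);
}
```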
@@ -268,18 +305,25 @@ export class ShadowsocksManagerService {
}

// Sets the default ports for new access keys
public async setPortForNewAccessKeys(req: RequestType, res: ResponseType, next: restify.Next):
Promise<void> {
public async setPortForNewAccessKeys(
req: RequestType,
res: ResponseType,
next: restify.Next
): Promise<void> {
try {
logging.debug(`setPortForNewAccessKeys request ${JSON.stringify(req.params)}`);
const port = req.params.port;
if (!port) {
return next(new restifyErrors.MissingParameterError(
{statusCode: 400}, 'Parameter `port` is missing'));
return next(
new restifyErrors.MissingParameterError({statusCode: 400}, 'Parameter `port` is missing')
);
} else if (typeof port !== 'number') {
return next(new restifyErrors.InvalidArgumentError(
return next(
new restifyErrors.InvalidArgumentError(
{statusCode: 400},
`Expected a numeric port, instead got ${port} of type ${typeof port}`));
`Expected a numeric port, instead got ${port} of type ${typeof port}`
)
);
}
await this.accessKeys.setPortForNewAccessKeys(port);
this.serverConfig.data().portForNewAccessKeys = port;

@@ -322,11 +366,16 @@ export class ShadowsocksManagerService {
const accessKeyId = validateAccessKeyId(req.params.id);
const name = req.params.name;
if (!name) {
return next(new restifyErrors.MissingParameterError(
{statusCode: 400}, 'Parameter `name` is missing'));
return next(
new restifyErrors.MissingParameterError({statusCode: 400}, 'Parameter `name` is missing')
);
} else if (typeof name !== 'string') {
return next(new restifyErrors.InvalidArgumentError(
{statusCode: 400}, 'Parameter `name` must be of type string'));
return next(
new restifyErrors.InvalidArgumentError(
{statusCode: 400},
'Parameter `name` must be of type string'
)
);
}
this.accessKeys.renameAccessKey(accessKeyId, name);
res.send(HttpSuccess.NO_CONTENT);

@@ -352,7 +401,7 @@ export class ShadowsocksManagerService {
this.accessKeys.setAccessKeyDataLimit(accessKeyId, limit);
res.send(HttpSuccess.NO_CONTENT);
return next();
} catch(error) {
} catch (error) {
logging.error(error);
if (error instanceof errors.AccessKeyNotFound) {
return next(new restifyErrors.NotFoundError(error.message));

@@ -370,7 +419,7 @@ export class ShadowsocksManagerService {
this.accessKeys.removeAccessKeyDataLimit(accessKeyId);
res.send(HttpSuccess.NO_CONTENT);
return next();
} catch(error) {
} catch (error) {
logging.error(error);
if (error instanceof errors.AccessKeyNotFound) {
return next(new restifyErrors.NotFoundError(error.message));

@@ -392,7 +441,10 @@ export class ShadowsocksManagerService {
return next();
} catch (error) {
logging.error(error);
if (error instanceof restifyErrors.InvalidArgumentError || error instanceof restifyErrors.MissingParameterError) {
if (
error instanceof restifyErrors.InvalidArgumentError ||
error instanceof restifyErrors.MissingParameterError
) {
return next(error);
}
return next(new restifyErrors.InternalServerError());

@@ -440,11 +492,19 @@ export class ShadowsocksManagerService {
logging.debug(`setShareMetrics request ${JSON.stringify(req.params)}`);
const metricsEnabled = req.params.metricsEnabled;
if (metricsEnabled === undefined || metricsEnabled === null) {
return next(new restifyErrors.MissingParameterError(
{statusCode: 400}, 'Parameter `metricsEnabled` is missing'));
return next(
new restifyErrors.MissingParameterError(
{statusCode: 400},
'Parameter `metricsEnabled` is missing'
)
);
} else if (typeof metricsEnabled !== 'boolean') {
return next(new restifyErrors.InvalidArgumentError(
{statusCode: 400}, 'Parameter `hours` must be an integer'));
return next(
new restifyErrors.InvalidArgumentError(
{statusCode: 400},
'Parameter `hours` must be an integer'
)
);
}
if (metricsEnabled) {
this.metricsPublisher.startSharing();
@@ -58,8 +58,10 @@ export class FakePrometheusClient extends PrometheusClient {
const queryResultData = {result: []} as QueryResultData;
for (const accessKeyId of Object.keys(this.bytesTransferredById)) {
const bytesTransferred = this.bytesTransferredById[accessKeyId] || 0;
queryResultData.result.push(
{metric: {'access_key': accessKeyId}, value: [bytesTransferred, `${bytesTransferred}`]});
queryResultData.result.push({
metric: {access_key: accessKeyId},
value: [bytesTransferred, `${bytesTransferred}`],
});
}
return queryResultData;
}
@@ -30,8 +30,11 @@ export class OutlineShadowsocksServer implements ShadowsocksServer {
// binaryFilename is the location for the outline-ss-server binary.
// configFilename is the location for the outline-ss-server config.
constructor(
private readonly binaryFilename: string, private readonly configFilename: string,
private readonly verbose: boolean, private readonly metricsLocation: string) {}
private readonly binaryFilename: string,
private readonly configFilename: string,
private readonly verbose: boolean,
private readonly metricsLocation: string
) {}

// Annotates the Prometheus data metrics with countries.
// ipCountryFilename is the location of the ip-country.mmdb IP-to-country database file.

@@ -64,8 +67,9 @@ export class OutlineShadowsocksServer implements ShadowsocksServer {
const keysJson = {keys: [] as ShadowsocksAccessKey[]};
for (const key of keys) {
if (!isAeadCipher(key.cipher)) {
logging.error(`Cipher ${key.cipher} for access key ${
key.id} is not supported: use an AEAD cipher instead.`);
logging.error(
`Cipher ${key.cipher} for access key ${key.id} is not supported: use an AEAD cipher instead.`
);
continue;
}

@@ -114,4 +118,4 @@ export class OutlineShadowsocksServer implements ShadowsocksServer {
function isAeadCipher(cipherAlias: string) {
cipherAlias = cipherAlias.toLowerCase();
return cipherAlias.endsWith('gcm') || cipherAlias.endsWith('poly1305');
}
}
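`isAeadCipher` is small enough to exercise standalone; the function below is copied from the hunk above, which identifies AEAD ciphers by their alias suffix:

```typescript
// From the diff: an alias names an AEAD cipher if it ends in 'gcm' or
// 'poly1305', compared case-insensitively.
function isAeadCipher(cipherAlias: string) {
  cipherAlias = cipherAlias.toLowerCase();
  return cipherAlias.endsWith('gcm') || cipherAlias.endsWith('poly1305');
}
```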
@@ -81,8 +81,9 @@ describe('ServerAccessKeyRepository', () => {
const repo = new RepoBuilder().build();
repo.createNewAccessKey().then((accessKey) => {
const NEW_NAME = 'newName';
expect(repo.renameAccessKey.bind(repo, 'badId', NEW_NAME))
.toThrowError(errors.AccessKeyNotFound);
expect(repo.renameAccessKey.bind(repo, 'badId', NEW_NAME)).toThrowError(
errors.AccessKeyNotFound
);
// List keys again and expect to NOT see the NEW_NAME.
const accessKeys = repo.listAccessKeys();
expect(accessKeys[0].name).not.toEqual(NEW_NAME);

@@ -126,9 +127,13 @@ describe('ServerAccessKeyRepository', () => {
await expectAsyncThrow(repo.setPortForNewAccessKeys.bind(repo, 0), errors.InvalidPortNumber);
await expectAsyncThrow(repo.setPortForNewAccessKeys.bind(repo, -1), errors.InvalidPortNumber);
await expectAsyncThrow(
repo.setPortForNewAccessKeys.bind(repo, 100.1), errors.InvalidPortNumber);
repo.setPortForNewAccessKeys.bind(repo, 100.1),
errors.InvalidPortNumber
);
await expectAsyncThrow(
repo.setPortForNewAccessKeys.bind(repo, 65536), errors.InvalidPortNumber);
repo.setPortForNewAccessKeys.bind(repo, 65536),
errors.InvalidPortNumber
);
done();
});

@@ -164,8 +169,8 @@ describe('ServerAccessKeyRepository', () => {
done();
});
});

it('setAccessKeyDataLimit can set a custom data limit', async(done) => {

it('setAccessKeyDataLimit can set a custom data limit', async (done) => {
const server = new FakeShadowsocksServer();
const config = new InMemoryConfig<AccessKeyConfigJson>({accessKeys: [], nextId: 0});
const repo = new RepoBuilder().shadowsocksServer(server).keyConfig(config).build();

@@ -178,18 +183,23 @@ describe('ServerAccessKeyRepository', () => {
});

async function setKeyLimitAndEnforce(
repo: ServerAccessKeyRepository, id: AccessKeyId, limit: DataLimit) {
repo: ServerAccessKeyRepository,
id: AccessKeyId,
limit: DataLimit
) {
repo.setAccessKeyDataLimit(id, limit);
// We enforce asynchronously, in setAccessKeyDataLimit, so explicitly call it here to make sure
// enforcement is done before we make assertions.
return repo.enforceAccessKeyDataLimits();
}

it('setAccessKeyDataLimit can change a key\'s limit status', async(done) => {
it("setAccessKeyDataLimit can change a key's limit status", async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
await repo.start(new ManualClock());
const key = await repo.createNewAccessKey();
await setKeyLimitAndEnforce(repo, key.id, {bytes: 0});

@@ -206,12 +216,14 @@ describe('ServerAccessKeyRepository', () => {
expect(serverKeys[0].id).toEqual(key.id);
done();
});

it('setAccessKeyDataLimit overrides default data limit', async(done) => {

it('setAccessKeyDataLimit overrides default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 750, '1': 1250});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
await repo.start(new ManualClock());
const lowerLimitThanDefault = await repo.createNewAccessKey();
const higherLimitThanDefault = await repo.createNewAccessKey();

@@ -242,8 +254,10 @@ describe('ServerAccessKeyRepository', () => {
it('removeAccessKeyDataLimit restores a key to the default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
const key = await repo.createNewAccessKey();
await repo.start(new ManualClock());
await repo.setDefaultDataLimit({bytes: 0});

@@ -255,11 +269,13 @@ describe('ServerAccessKeyRepository', () => {
done();
});

it('setAccessKeyDataLimit can change a key\'s limit status', async (done) => {
it("setAccessKeyDataLimit can change a key's limit status", async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
await repo.start(new ManualClock());
const key = await repo.createNewAccessKey();
await setKeyLimitAndEnforce(repo, key.id, {bytes: 0});

@@ -280,8 +296,10 @@ describe('ServerAccessKeyRepository', () => {
it('setAccessKeyDataLimit overrides default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 750, '1': 1250});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
await repo.start(new ManualClock());
const lowerLimitThanDefault = await repo.createNewAccessKey();
const higherLimitThanDefault = await repo.createNewAccessKey();

@@ -319,8 +337,10 @@ describe('ServerAccessKeyRepository', () => {
it('removeAccessKeyDataLimit restores a key to the default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
const key = await repo.createNewAccessKey();
await repo.start(new ManualClock());
await repo.setDefaultDataLimit({bytes: 0});

@@ -332,11 +352,13 @@ describe('ServerAccessKeyRepository', () => {
done();
});

it('removeAccessKeyDataLimit can restore an over-limit access key', async(done) => {
it('removeAccessKeyDataLimit can restore an over-limit access key', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
const key = await repo.createNewAccessKey();
await repo.start(new ManualClock());

@@ -349,7 +371,7 @@ describe('ServerAccessKeyRepository', () => {
expect(server.getAccessKeys().length).toEqual(1);
done();
});


it('can set default data limit', async (done) => {
const repo = new RepoBuilder().build();
const limit = {bytes: 5000};

@@ -361,8 +383,10 @@ describe('ServerAccessKeyRepository', () => {
it('setDefaultDataLimit updates keys limit status', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 200});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
const accessKey1 = await repo.createNewAccessKey();
const accessKey2 = await repo.createNewAccessKey();
await repo.start(new ManualClock());

@@ -404,10 +428,10 @@ describe('ServerAccessKeyRepository', () => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 100});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.defaultDataLimit({bytes: 200})
.build();
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.defaultDataLimit({bytes: 200})
.build();

const accessKey1 = await repo.createNewAccessKey();
const accessKey2 = await repo.createNewAccessKey();

@@ -426,34 +450,45 @@ describe('ServerAccessKeyRepository', () => {
});

it('enforceAccessKeyDataLimits updates keys limit status', async (done) => {
const prometheusClient =
new FakePrometheusClient({'0': 100, '1': 200, '2': 300, '3': 400, '4': 500});
const prometheusClient = new FakePrometheusClient({
'0': 100,
'1': 200,
'2': 300,
'3': 400,
'4': 500,
});
const limit = {bytes: 250};
const repo =
new RepoBuilder().prometheusClient(prometheusClient).defaultDataLimit(limit).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.defaultDataLimit(limit)
.build();
for (let i = 0; i < Object.keys(prometheusClient.bytesTransferredById).length; ++i) {
await repo.createNewAccessKey();
}
await repo.enforceAccessKeyDataLimits();
for (const key of repo.listAccessKeys()) {
expect(key.isOverDataLimit)
.toEqual(prometheusClient.bytesTransferredById[key.id] > limit.bytes);
expect(key.isOverDataLimit).toEqual(
prometheusClient.bytesTransferredById[key.id] > limit.bytes
);
}
// Simulate a change in usage.
prometheusClient.bytesTransferredById = {'0': 500, '1': 400, '2': 300, '3': 200, '4': 100};

await repo.enforceAccessKeyDataLimits();
for (const key of repo.listAccessKeys()) {
expect(key.isOverDataLimit)
.toEqual(prometheusClient.bytesTransferredById[key.id] > limit.bytes);
expect(key.isOverDataLimit).toEqual(
prometheusClient.bytesTransferredById[key.id] > limit.bytes
);
}
done();
});

it('enforceAccessKeyDataLimits respects both default and per-key limits', async (done) => {
const prometheusClient = new FakePrometheusClient({'0': 200, '1': 300});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).defaultDataLimit({bytes: 500}).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.defaultDataLimit({bytes: 500})
.build();
const perKeyLimited = await repo.createNewAccessKey();
const defaultLimited = await repo.createNewAccessKey();
await setKeyLimitAndEnforce(repo, perKeyLimited.id, {bytes: 100});
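The assertions in these tests reduce to one rule: a key is over-limit exactly when its transferred bytes exceed the applicable limit, and a per-key limit takes precedence over the server default. A minimal sketch of that rule (function and parameter names are hypothetical, not the repository's API):

```typescript
interface DataLimit {
  bytes: number;
}

// Hypothetical reduction of the enforcement rule the tests above exercise.
function isOverLimit(
  bytesTransferred: number,
  defaultLimit?: DataLimit,
  perKeyLimit?: DataLimit
): boolean {
  // A per-key limit overrides the server-wide default; with no limit set,
  // a key can never be over limit.
  const limit = perKeyLimit ?? defaultLimit;
  return limit !== undefined && bytesTransferred > limit.bytes;
}
```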
@@ -475,10 +510,10 @@ describe('ServerAccessKeyRepository', () => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 100});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.defaultDataLimit({bytes: 200})
.build();
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.defaultDataLimit({bytes: 200})
.build();

const accessKey1 = await repo.createNewAccessKey();
const accessKey2 = await repo.createNewAccessKey();

@@ -552,8 +587,10 @@ describe('ServerAccessKeyRepository', () => {
it('start periodically enforces access key data limits', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 200, '2': 400});
const repo =
new RepoBuilder().prometheusClient(prometheusClient).shadowsocksServer(server).build();
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
.build();
const accessKey1 = await repo.createNewAccessKey();
const accessKey2 = await repo.createNewAccessKey();
const accessKey3 = await repo.createNewAccessKey();

@@ -650,7 +687,12 @@ class RepoBuilder {

public build(): ServerAccessKeyRepository {
return new ServerAccessKeyRepository(
this.port_, 'hostname', this.keyConfig_, this.shadowsocksServer_, this.prometheusClient_,
this.defaultDataLimit_);
this.port_,
'hostname',
this.keyConfig_,
this.shadowsocksServer_,
this.prometheusClient_,
this.defaultDataLimit_
);
}
}
@ -20,7 +20,14 @@ import {isPortUsed} from '../infrastructure/get_port';
|
|||
import {JsonConfig} from '../infrastructure/json_config';
|
||||
import * as logging from '../infrastructure/logging';
|
||||
import {PrometheusClient} from '../infrastructure/prometheus_scraper';
|
||||
import {AccessKey, AccessKeyId, AccessKeyMetricsId, AccessKeyRepository, DataLimit, ProxyParams} from '../model/access_key';
|
||||
import {
|
||||
AccessKey,
|
||||
AccessKeyId,
|
||||
AccessKeyMetricsId,
|
||||
AccessKeyRepository,
|
||||
DataLimit,
|
||||
ProxyParams,
|
||||
} from '../model/access_key';
|
||||
import * as errors from '../model/errors';
|
||||
import {ShadowsocksServer} from '../model/shadowsocks_server';
|
||||
import {PrometheusManagerMetrics} from './manager_metrics';
|
||||
|
|
@ -47,8 +54,12 @@ export interface AccessKeyConfigJson {
|
|||
class ServerAccessKey implements AccessKey {
|
||||
public isOverDataLimit = false;
|
||||
constructor(
|
||||
readonly id: AccessKeyId, public name: string, public metricsId: AccessKeyMetricsId,
|
||||
readonly proxyParams: ProxyParams, public dataLimit?: DataLimit) {}
|
||||
readonly id: AccessKeyId,
|
||||
public name: string,
|
||||
public metricsId: AccessKeyMetricsId,
|
||||
readonly proxyParams: ProxyParams,
|
||||
public dataLimit?: DataLimit
|
||||
) {}
|
||||
}
|
||||
|
||||
// Generates a random password for Shadowsocks access keys.
|
||||
|
|
@ -64,7 +75,12 @@ function makeAccessKey(hostname: string, accessKeyJson: AccessKeyStorageJson): A
|
|||
password: accessKeyJson.password,
|
||||
};
|
||||
return new ServerAccessKey(
|
||||
accessKeyJson.id, accessKeyJson.name, accessKeyJson.metricsId, proxyParams, accessKeyJson.dataLimit);
|
||||
accessKeyJson.id,
|
||||
accessKeyJson.name,
|
||||
accessKeyJson.metricsId,
|
||||
proxyParams,
|
||||
accessKeyJson.dataLimit
|
||||
);
|
||||
}
|
||||
|
||||
function accessKeyToStorageJson(accessKey: AccessKey): AccessKeyStorageJson {
|
||||
|
|
@ -75,7 +91,7 @@ function accessKeyToStorageJson(accessKey: AccessKey): AccessKeyStorageJson {
|
|||
password: accessKey.proxyParams.password,
|
||||
port: accessKey.proxyParams.portNumber,
|
||||
encryptionMethod: accessKey.proxyParams.encryptionMethod,
|
||||
dataLimit: accessKey.dataLimit
|
||||
dataLimit: accessKey.dataLimit,
|
||||
};
|
||||
}
|
||||
|
||||
|
|
@ -83,15 +99,18 @@ function accessKeyToStorageJson(accessKey: AccessKey): AccessKeyStorageJson {
|
|||
// to start and stop per-access-key Shadowsocks instances. Requires external validation
|
||||
// that portForNewAccessKeys is valid.
|
||||
export class ServerAccessKeyRepository implements AccessKeyRepository {
|
||||
private static DATA_LIMITS_ENFORCEMENT_INTERVAL_MS = 60 * 60 * 1000; // 1h
|
||||
private static DATA_LIMITS_ENFORCEMENT_INTERVAL_MS = 60 * 60 * 1000; // 1h
|
||||
private NEW_USER_ENCRYPTION_METHOD = 'chacha20-ietf-poly1305';
|
||||
private accessKeys: ServerAccessKey[];
|
||||
|
||||
constructor(
|
||||
private portForNewAccessKeys: number, private proxyHostname: string,
|
||||
private keyConfig: JsonConfig<AccessKeyConfigJson>,
|
||||
private shadowsocksServer: ShadowsocksServer, private prometheusClient: PrometheusClient,
|
||||
private _defaultDataLimit?: DataLimit) {
|
||||
private portForNewAccessKeys: number,
|
||||
private proxyHostname: string,
|
||||
private keyConfig: JsonConfig<AccessKeyConfigJson>,
|
||||
private shadowsocksServer: ShadowsocksServer,
|
||||
private prometheusClient: PrometheusClient,
|
||||
private _defaultDataLimit?: DataLimit
|
||||
) {
|
||||
if (this.keyConfig.data().accessKeys === undefined) {
|
||||
this.keyConfig.data().accessKeys = [];
|
||||
}
|
||||
|
|
@@ -114,7 +133,9 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
     await tryEnforceDataLimits();
     await this.updateServer();
     clock.setInterval(
-        tryEnforceDataLimits, ServerAccessKeyRepository.DATA_LIMITS_ENFORCEMENT_INTERVAL_MS);
+      tryEnforceDataLimits,
+      ServerAccessKeyRepository.DATA_LIMITS_ENFORCEMENT_INTERVAL_MS
+    );
   }
 
   private isExistingAccessKeyPort(port: number): boolean {
@@ -131,7 +152,7 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
     if (!Number.isInteger(port) || port < 1 || port > 65535) {
       throw new errors.InvalidPortNumber(port.toString());
     }
-    if (!this.isExistingAccessKeyPort(port) && await isPortUsed(port)) {
+    if (!this.isExistingAccessKeyPort(port) && (await isPortUsed(port))) {
       throw new errors.PortUnavailable(port);
     }
     this.portForNewAccessKeys = port;
@@ -169,7 +190,7 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
   }
 
   listAccessKeys(): AccessKey[] {
-    return [...this.accessKeys];  // Return a copy of the access key array.
+    return [...this.accessKeys]; // Return a copy of the access key array.
   }
 
   renameAccessKey(id: AccessKeyId, name: string) {
@@ -190,7 +211,7 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
     this.enforceAccessKeyDataLimits();
   }
 
-  get defaultDataLimit(): DataLimit|undefined {
+  get defaultDataLimit(): DataLimit | undefined {
     return this._defaultDataLimit;
   }
 
@@ -204,7 +225,7 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
     this.enforceAccessKeyDataLimits();
   }
 
-  getMetricsId(id: AccessKeyId): AccessKeyMetricsId|undefined {
+  getMetricsId(id: AccessKeyId): AccessKeyMetricsId | undefined {
     const accessKey = this.getAccessKey(id);
     return accessKey ? accessKey.metricsId : undefined;
   }
@@ -213,8 +234,8 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
   // Updates access key data usage.
   async enforceAccessKeyDataLimits() {
     const metrics = new PrometheusManagerMetrics(this.prometheusClient);
-    const bytesTransferredById =
-        (await metrics.getOutboundByteTransfer({hours: 30 * 24})).bytesTransferredByUserId;
+    const bytesTransferredById = (await metrics.getOutboundByteTransfer({hours: 30 * 24}))
+      .bytesTransferredByUserId;
     let limitStatusChanged = false;
     for (const accessKey of this.accessKeys) {
       const usageBytes = bytesTransferredById[accessKey.id] ?? 0;
@@ -232,23 +253,25 @@ export class ServerAccessKeyRepository implements AccessKeyRepository {
   }
 
   private updateServer(): Promise<void> {
-    const serverAccessKeys = this.accessKeys.filter(key => !key.isOverDataLimit).map(key => {
-      return {
-        id: key.id,
-        port: key.proxyParams.portNumber,
-        cipher: key.proxyParams.encryptionMethod,
-        secret: key.proxyParams.password
-      };
-    });
+    const serverAccessKeys = this.accessKeys
+      .filter((key) => !key.isOverDataLimit)
+      .map((key) => {
+        return {
+          id: key.id,
+          port: key.proxyParams.portNumber,
+          cipher: key.proxyParams.encryptionMethod,
+          secret: key.proxyParams.password,
+        };
+      });
     return this.shadowsocksServer.update(serverAccessKeys);
   }
 
   private loadAccessKeys(): AccessKey[] {
-    return this.keyConfig.data().accessKeys.map(key => makeAccessKey(this.proxyHostname, key));
+    return this.keyConfig.data().accessKeys.map((key) => makeAccessKey(this.proxyHostname, key));
   }
 
   private saveAccessKeys() {
-    this.keyConfig.data().accessKeys = this.accessKeys.map(key => accessKeyToStorageJson(key));
+    this.keyConfig.data().accessKeys = this.accessKeys.map((key) => accessKeyToStorageJson(key));
     this.keyConfig.write();
   }
 
@@ -19,15 +19,28 @@ import {version} from '../package.json';
 import {AccessKeyConfigJson} from './server_access_key';
 
 import {ServerConfigJson} from './server_config';
-import {DailyFeatureMetricsReportJson, HourlyServerMetricsReportJson, KeyUsage, MetricsCollectorClient, OutlineSharedMetricsPublisher, UsageMetrics} from './shared_metrics';
+import {
+  DailyFeatureMetricsReportJson,
+  HourlyServerMetricsReportJson,
+  KeyUsage,
+  MetricsCollectorClient,
+  OutlineSharedMetricsPublisher,
+  UsageMetrics,
+} from './shared_metrics';
 
 describe('OutlineSharedMetricsPublisher', () => {
   describe('Enable/Disable', () => {
     it('Mirrors config', () => {
      const serverConfig = new InMemoryConfig<ServerConfigJson>({});
 
-      const publisher =
-          new OutlineSharedMetricsPublisher(new ManualClock(), serverConfig, null, null, null, null);
+      const publisher = new OutlineSharedMetricsPublisher(
+        new ManualClock(),
+        serverConfig,
+        null,
+        null,
+        null,
+        null
+      );
       expect(publisher.isSharingEnabled()).toBeFalsy();
 
       publisher.startSharing();
@@ -40,8 +53,14 @@ describe('OutlineSharedMetricsPublisher', () => {
     });
     it('Reads from config', () => {
       const serverConfig = new InMemoryConfig<ServerConfigJson>({metricsEnabled: true});
-      const publisher =
-          new OutlineSharedMetricsPublisher(new ManualClock(), serverConfig, null, null, null, null);
+      const publisher = new OutlineSharedMetricsPublisher(
+        new ManualClock(),
+        serverConfig,
+        null,
+        null,
+        null,
+        null
+      );
       expect(publisher.isSharingEnabled()).toBeTruthy();
     });
   });
@@ -54,7 +73,13 @@ describe('OutlineSharedMetricsPublisher', () => {
       const toMetricsId = (id: AccessKeyId) => `M(${id})`;
       const metricsCollector = new FakeMetricsCollector();
       const publisher = new OutlineSharedMetricsPublisher(
-          clock, serverConfig, null, usageMetrics, toMetricsId, metricsCollector);
+        clock,
+        serverConfig,
+        null,
+        usageMetrics,
+        toMetricsId,
+        metricsCollector
+      );
 
       publisher.startSharing();
       usageMetrics.usage = [
@@ -73,7 +98,7 @@ describe('OutlineSharedMetricsPublisher', () => {
           {userId: 'M(user-0)', bytesTransferred: 11, countries: ['AA', 'BB']},
           {userId: 'M(user-1)', bytesTransferred: 22, countries: ['CC']},
           {userId: 'M(user-0)', bytesTransferred: 33, countries: ['AA', 'DD']},
-        ]
+        ],
       });
 
       startTime = clock.nowMs;
@@ -90,8 +115,8 @@ describe('OutlineSharedMetricsPublisher', () => {
         endUtcMs: clock.nowMs,
         userReports: [
           {userId: 'M(user-0)', bytesTransferred: 44, countries: ['EE']},
-          {userId: 'M(user-2)', bytesTransferred: 55, countries: ['FF']}
-        ]
+          {userId: 'M(user-2)', bytesTransferred: 55, countries: ['FF']},
+        ],
       });
 
       publisher.stopSharing();
@@ -104,7 +129,13 @@ describe('OutlineSharedMetricsPublisher', () => {
       const toMetricsId = (id: AccessKeyId) => `M(${id})`;
       const metricsCollector = new FakeMetricsCollector();
       const publisher = new OutlineSharedMetricsPublisher(
-          clock, serverConfig, null, usageMetrics, toMetricsId, metricsCollector);
+        clock,
+        serverConfig,
+        null,
+        usageMetrics,
+        toMetricsId,
+        metricsCollector
+      );
 
       publisher.startSharing();
       usageMetrics.usage = [
@@ -122,7 +153,7 @@ describe('OutlineSharedMetricsPublisher', () => {
         userReports: [
           {userId: 'M(user-1)', bytesTransferred: 22, countries: ['CC']},
           {userId: 'M(user-0)', bytesTransferred: 33, countries: ['AA', 'DD']},
-        ]
+        ],
       });
       publisher.stopSharing();
     });
@@ -130,28 +161,33 @@ describe('OutlineSharedMetricsPublisher', () => {
   it('reports feature metrics correctly', async () => {
     const clock = new ManualClock();
     let timestamp = clock.nowMs;
-    const serverConfig = new InMemoryConfig<ServerConfigJson>(
-        {serverId: 'server-id', accessKeyDataLimit: {bytes: 123}});
+    const serverConfig = new InMemoryConfig<ServerConfigJson>({
+      serverId: 'server-id',
+      accessKeyDataLimit: {bytes: 123},
+    });
     let keyId = 0;
     const makeKeyJson = (dataLimit?: DataLimit) => {
       return {
         id: (keyId++).toString(),
-        metricsId: "id",
-        name: "name",
-        password: "pass",
+        metricsId: 'id',
+        name: 'name',
+        password: 'pass',
         port: 12345,
         dataLimit,
       };
     };
     const keyConfig = new InMemoryConfig<AccessKeyConfigJson>({
-      accessKeys: [
-        makeKeyJson({bytes: 2}),
-        makeKeyJson()
-      ]
+      accessKeys: [makeKeyJson({bytes: 2}), makeKeyJson()],
     });
     const metricsCollector = new FakeMetricsCollector();
     const publisher = new OutlineSharedMetricsPublisher(
-        clock, serverConfig, keyConfig, new ManualUsageMetrics(), (id: AccessKeyId) => '', metricsCollector);
+      clock,
+      serverConfig,
+      keyConfig,
+      new ManualUsageMetrics(),
+      (id: AccessKeyId) => '',
+      metricsCollector
+    );
 
     publisher.startSharing();
     await clock.runCallbacks();
@@ -161,8 +197,8 @@ describe('OutlineSharedMetricsPublisher', () => {
       timestampUtcMs: timestamp,
       dataLimit: {
         enabled: true,
-        perKeyLimitCount: 1
-      }
+        perKeyLimitCount: 1,
+      },
     });
     clock.nowMs += 24 * 60 * 60 * 1000;
     timestamp = clock.nowMs;
@@ -175,8 +211,8 @@ describe('OutlineSharedMetricsPublisher', () => {
       timestampUtcMs: timestamp,
       dataLimit: {
         enabled: false,
-        perKeyLimitCount: 1
-      }
+        perKeyLimitCount: 1,
+      },
     });
 
     clock.nowMs += 24 * 60 * 60 * 1000;
@@ -186,13 +222,21 @@ describe('OutlineSharedMetricsPublisher', () => {
   });
   it('does not report metrics when sharing is disabled', async () => {
     const clock = new ManualClock();
-    const serverConfig =
-        new InMemoryConfig<ServerConfigJson>({serverId: 'server-id', metricsEnabled: false});
+    const serverConfig = new InMemoryConfig<ServerConfigJson>({
+      serverId: 'server-id',
+      metricsEnabled: false,
+    });
     const metricsCollector = new FakeMetricsCollector();
     spyOn(metricsCollector, 'collectServerUsageMetrics').and.callThrough();
     spyOn(metricsCollector, 'collectFeatureMetrics').and.callThrough();
     const publisher = new OutlineSharedMetricsPublisher(
-        clock, serverConfig, new InMemoryConfig<AccessKeyConfigJson>({}), new ManualUsageMetrics(), (id: AccessKeyId) => '', metricsCollector);
+      clock,
+      serverConfig,
+      new InMemoryConfig<AccessKeyConfigJson>({}),
+      new ManualUsageMetrics(),
+      (id: AccessKeyId) => '',
+      metricsCollector
+    );
 
     await clock.runCallbacks();
     expect(metricsCollector.collectServerUsageMetrics).not.toHaveBeenCalled();
@@ -12,7 +12,6 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-
 import {Clock} from '../infrastructure/clock';
 import * as follow_redirects from '../infrastructure/follow_redirects';
 import {JsonConfig} from '../infrastructure/json_config';
@@ -88,9 +87,9 @@ export class PrometheusUsageMetrics implements UsageMetrics {
   async getUsage(): Promise<KeyUsage[]> {
     const timeDeltaSecs = Math.round((Date.now() - this.resetTimeMs) / 1000);
     // We measure the traffic to and from the target, since that's what we are protecting.
-    const result =
-        await this.prometheusClient.query(`sum(increase(shadowsocks_data_bytes{dir=~"p>t|p<t"}[${
-            timeDeltaSecs}s])) by (location, access_key)`);
+    const result = await this.prometheusClient.query(
+      `sum(increase(shadowsocks_data_bytes{dir=~"p>t|p<t"}[${timeDeltaSecs}s])) by (location, access_key)`
+    );
     const usage = [] as KeyUsage[];
     for (const entry of result.result) {
       const accessKeyId = entry.metric['access_key'] || '';
@@ -130,13 +129,15 @@ export class RestMetricsCollectorClient {
     const options = {
       headers: {'Content-Type': 'application/json'},
       method: 'POST',
-      body: reportJson
+      body: reportJson,
     };
     const url = `${this.serviceUrl}${urlPath}`;
     logging.info(`Posting metrics to ${url} with options ${JSON.stringify(options)}`);
     try {
-      const response =
-          await follow_redirects.requestFollowRedirectsWithSameMethodAndBody(url, options);
+      const response = await follow_redirects.requestFollowRedirectsWithSameMethodAndBody(
+        url,
+        options
+      );
       if (!response.ok) {
         throw new Error(`Got status ${response.status}`);
       }
@@ -158,11 +159,13 @@ export class OutlineSharedMetricsPublisher implements SharedMetricsPublisher {
   // toMetricsId: maps Access key ids to metric ids
   // metricsUrl: where to post the metrics
   constructor(
-      private clock: Clock, private serverConfig: JsonConfig<ServerConfigJson>,
-      private keyConfig: JsonConfig<AccessKeyConfigJson>,
-      usageMetrics: UsageMetrics,
-      private toMetricsId: (accessKeyId: AccessKeyId) => AccessKeyMetricsId,
-      private metricsCollector: MetricsCollectorClient) {
+    private clock: Clock,
+    private serverConfig: JsonConfig<ServerConfigJson>,
+    private keyConfig: JsonConfig<AccessKeyConfigJson>,
+    usageMetrics: UsageMetrics,
+    private toMetricsId: (accessKeyId: AccessKeyId) => AccessKeyMetricsId,
+    private metricsCollector: MetricsCollectorClient
+  ) {
     // Start timer
     this.reportStartTimestampMs = this.clock.now();
 
@@ -219,14 +222,14 @@ export class OutlineSharedMetricsPublisher implements SharedMetricsPublisher {
       userReports.push({
        userId: this.toMetricsId(keyUsage.accessKeyId) || '',
         bytesTransferred: keyUsage.inboundBytes,
-        countries: [...keyUsage.countries]
+        countries: [...keyUsage.countries],
       });
     }
     const report = {
       serverId: this.serverConfig.data().serverId,
       startUtcMs: this.reportStartTimestampMs,
       endUtcMs: reportEndTimestampMs,
-      userReports
+      userReports,
     } as HourlyServerMetricsReportJson;
 
     this.reportStartTimestampMs = reportEndTimestampMs;
@@ -244,8 +247,8 @@ export class OutlineSharedMetricsPublisher implements SharedMetricsPublisher {
       timestampUtcMs: this.clock.now(),
       dataLimit: {
         enabled: !!this.serverConfig.data().accessKeyDataLimit,
-        perKeyLimitCount: keys.filter(key => !!key.dataLimit).length
-      }
+        perKeyLimitCount: keys.filter((key) => !!key.dataLimit).length,
+      },
     };
     await this.metricsCollector.collectFeatureMetrics(featureMetricsReport);
   }
@@ -1 +1 @@
-{"users":[]}
+{"users": []}
@@ -9,13 +9,6 @@
     "resolveJsonModule": true,
     "sourceMap": true
   },
-  "include": [
-    "server/main.ts",
-    "**/*.spec.ts",
-    "types/**/*.d.ts"
-  ],
-  "exclude": [
-    "build",
-    "node_modules"
-  ]
-}
+  "include": ["server/main.ts", "**/*.spec.ts", "types/**/*.d.ts"],
+  "exclude": ["build", "node_modules"]
+}
src/shadowbox/types/node.d.ts (vendored)
@@ -21,9 +21,11 @@ declare module 'dns' {
 
 // https://nodejs.org/dist/latest-v8.x/docs/api/child_process.html#child_process_child_process_exec_command_options_callback
 declare module 'child_process' {
-  export interface ExecError { code: number; }
+  export interface ExecError {
+    code: number;
+  }
   export function exec(
-      command: string,
-      callback?: (error: ExecError|undefined, stdout: string, stderr: string) =>
-          void): ChildProcess;
+    command: string,
+    callback?: (error: ExecError | undefined, stdout: string, stderr: string) => void
+  ): ChildProcess;
 }
@@ -26,14 +26,14 @@ const config = {
   module: {rules: [{test: /\.ts(x)?$/, use: 'ts-loader'}]},
   node: {
     // Use the regular node behavior, the directory name of the output file when run.
-    __dirname: false
+    __dirname: false,
  },
   plugins: [
     // WORKAROUND: some of our (transitive) dependencies use node-gently, which hijacks `require`.
     // Setting global.GENTLY to false makes these dependencies use standard require.
-    new webpack.DefinePlugin({'global.GENTLY': false})
+    new webpack.DefinePlugin({'global.GENTLY': false}),
   ],
-  resolve: {extensions: ['.tsx', '.ts', '.js']}
+  resolve: {extensions: ['.tsx', '.ts', '.js']},
 };
 
 module.exports = config;
third_party/shellcheck/README.md (vendored)
@@ -1,10 +1,11 @@
 # Outline Shellcheck Wrapper
 
-This directory is used to lint our scripts using [Shellcheck](https://www.shellcheck.net/). To ensure consistency across developer systems, the included script
-* Attempts to identify the developer's OS (Linux, macOS, or Windows)
-* Downloads a pinned version of Shellcheck into `./download`
-* Checks the archive hash
-* Extracts the executable
-* Runs the executable
+This directory is used to lint our scripts using [Shellcheck](https://www.shellcheck.net/). To ensure consistency across developer systems, the included script
+
+- Attempts to identify the developer's OS (Linux, macOS, or Windows)
+- Downloads a pinned version of Shellcheck into `./download`
+- Checks the archive hash
+- Extracts the executable
+- Runs the executable
 
-The executable is cached on the developer's system after the first download. To clear the cache, run `rm download` (or `npm run clean` in the repository root).
+The executable is cached on the developer's system after the first download. To clear the cache, run `rm download` (or `npm run clean` in the repository root).
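The download, verify, and run steps that the shellcheck wrapper README describes follow a common pinned-download pattern, sketched below. This is an illustrative stand-in, not the project's actual wrapper script: the cache path is a temp directory, the "archive" is a demo payload written locally, and the pinned checksum is computed from that payload rather than from a real Shellcheck release.

```shell
#!/bin/sh
# Minimal sketch of the pinned-download pattern: fetch an artifact once into
# a cache, verify its hash before trusting it, and refuse to run on mismatch.
# The payload and checksum here are demo placeholders, not real values.
set -eu

CACHE_DIR="${TMPDIR:-/tmp}/shellcheck-demo"
ARCHIVE="$CACHE_DIR/shellcheck.bin"
mkdir -p "$CACHE_DIR"

# Stand-in for the one-time download into the cache directory.
[ -f "$ARCHIVE" ] || printf 'demo payload' > "$ARCHIVE"

# A real wrapper would hard-code the hash of a specific pinned release;
# here we derive the "expected" hash from the demo payload itself.
EXPECTED="$(printf 'demo payload' | sha256sum | awk '{print $1}')"
ACTUAL="$(sha256sum "$ARCHIVE" | awk '{print $1}')"

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "hash ok: safe to extract and run"
else
  echo "hash mismatch: refusing to run" >&2
  exit 1
fi
```

Re-running the script reuses the cached file, which mirrors why the README suggests `rm download` to force a fresh fetch.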
@@ -6,9 +6,9 @@
     "noImplicitThis": true,
     "moduleResolution": "Node",
     "sourceMap": true,
-    "experimentalDecorators":true,
+    "experimentalDecorators": true,
     "allowJs": true,
     "resolveJsonModule": true,
-    "noUnusedLocals": true,
+    "noUnusedLocals": true
   }
 }
@@ -3,7 +3,8 @@
   "rules": {
     "array-type": [true, "array-simple"],
     "arrow-return-shorthand": true,
-    "ban-types": [true,
+    "ban-types": [
+      true,
       ["Object", "Use {} instead."],
       ["String", "Use 'string' instead."],
       ["Number", "Use 'number' instead."],